Memoways is very proud to be one of the five startups selected for the Swiss ICT Newcomer Award 2014!
Good luck to all our colleagues for the final nomination, which will take place on 5 November 2014 in Lucerne.
More news to come soon…
This post will unveil some cool features that will come down the road and open new perspectives on how to make the most out of our platform.
Let's begin with the front-end applications, where the end user can interact with the content prepared by the content producer.
The new presentation of the movies created with Memowalk
We are working on the “bubble remix” feature, enabling users to dynamically remix the video stream just by playing with metadata (tags, taxonomies etc.) in a very visual way. Basically, the user sets up a filter that is then applied in real time to the assets contained in the Beam. We also plan to add a feature that makes it possible to change the order of the assets manually in the timeline (roll view), to control the exact placement of the assets.
So dynamic filtering and manual ordering of the assets will be available to any user, enabling a precise and intuitive remix.
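The two operations described above can be pictured as a filter over a list of tagged assets plus a user-defined ordering. Here is a minimal sketch in Python; the asset structure and the `filter_beam` / `reorder` helpers are hypothetical illustrations, not the actual Memoways API:

```python
# Sketch of a "bubble remix": filter a Beam's assets by tags,
# then apply the manual ordering chosen by the user in the roll view.

def filter_beam(assets, selected_tags):
    """Keep only assets that carry every selected tag."""
    wanted = set(selected_tags)
    return [a for a in assets if wanted <= set(a["tags"])]

def reorder(assets, order):
    """Place assets in the order the user arranged them."""
    by_id = {a["id"]: a for a in assets}
    return [by_id[i] for i in order if i in by_id]

beam = [
    {"id": 1, "tags": ["interview", "paris"]},
    {"id": 2, "tags": ["street", "paris"]},
    {"id": 3, "tags": ["interview", "lyon"]},
]

remix = filter_beam(beam, ["paris"])   # keeps assets 1 and 2
playlist = reorder(remix, [2, 1])      # user puts asset 2 first
```

The filtering step is automatic (driven by the metadata), while the ordering step stays fully in the user's hands.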
The other feature we are working on is the capability to name and save the remixed Beam and to share it on the web, connected to the user's account.
One additional feature we plan to integrate is the possibility to remix various Beams together – add Beams to a selection and remix through various data sets of different content producers. This opens a big door on original remixes and surprising combinations through various content collections…
For the moment the user sees a list of Beams that are available to download on the mobile device. We plan to show the Beams that were created by the user (as a private feed), enabling them to create personal sets of content before “walking a movie”.
For example, the user can pre-filter the content they are interested in with the Beam remix, name and save their own Beam, and then download this Beam in the Memowalk mobile application.
One other very important feature we plan to add is the capability to create user-generated content while listening to a story. The user will be able to add pictures or videos connected to the story; at the end of the walk, the content will be uploaded and connected to the main story.
A monitoring service will prevent users from publishing inappropriate content; users will also be able to edit their own content and make it available to others, opening the way for contextual and collaborative content creation…
Let's now look at the back-end tools that will be used by our clients to create, manage and publish interactive and original projects.
Michi, the administration console.
To manage usages, users and projects, Michi offers administration features that will be accessible directly through the main Memoways website (log in to your account).
Michi already offers:
- creation and invitation of users to your workspace & projects
- creation / renaming of projects
The next features we will implement in Michi are:
- account administration (billing)
- statistics and metrics
- project settings (information, visuals)
- management of the published Beams (categories, comments, publication)
- management of the published walks (publication, promotion)
Further on, we will add:
- creation of algorithms through simple drag & drop of predefined calculation modules with user-definable variables
Tansa, the indexing application.
Our OS X application is the central part of the platform, as it is where the content producer uploads, indexes and administers the content.
The current features of Tansa are described on this page; we are working on adding the following:
- interface & workflow optimizations (better interaction design)
- connecting people from a contact database to assets through roles (e.g. connecting the protagonist or actor to a video)
- creating and setting Facets through the Schema Editor
- connecting assets to POIs or other maps through XY coordinates, using your own bitmap picture to map assets
Other things we would like to add:
- enhanced video capabilities (basic editing, filtering)
- cloning assets with subsets of specific metadata (having several versions of the same asset in a project)
Web services, APIs.
This is for power users and developers, enabling them to create their own usages on top of our technology.
The current APIs are described here; we are working on adding the following features to our Memoways Cloud Services:
- building our own video transcoding platform
- accepting uploaded resources from mobile and web sources
- push notifications
Memowalk is our innovative smartphone application to “walk your movie”.
Walk your movie? Yes: while walking, you automatically get a movie based on footage from the places you pass through, and the form of the narration adapts to your behavior…
You walk fast? You get a fast edit based on short clips. You stand still? The story will be built upon longer clips, like interviews.
We made a short video to explain some of the possible usages that can be made with this generic application.
In short, Memowalk creates a personal story based on the interests, the location and the behavior of each individual user. It all depends on the content, the metadata (the indexing of the content) and the settings of the editing engine.
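The behavior rule described above (fast walk → short clips, standing still → long clips) can be sketched as a simple selection function. The speed thresholds and clip durations below are invented for illustration; the real editing engine uses its own, more sophisticated algorithm:

```python
# Toy sketch: pick the pool of candidate clips according to walking speed.
# Thresholds and durations are illustrative, not Memoways' real values.

def pick_clip_pool(speed_m_s, clips):
    """Fast walkers get short clips; standing still favors long ones."""
    if speed_m_s > 1.5:                  # brisk walk
        return [c for c in clips if c["duration"] <= 5]
    if speed_m_s < 0.2:                  # standing still
        return [c for c in clips if c["duration"] >= 20]
    return clips                         # normal pace: anything goes

clips = [
    {"name": "street-pan", "duration": 4},
    {"name": "interview", "duration": 45},
    {"name": "detail-shot", "duration": 12},
]

fast = pick_clip_pool(2.0, clips)    # short clips only
still = pick_clip_pool(0.0, clips)   # long clips, e.g. interviews
```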
It will be possible to get branded and personalized versions of this application.
Memowalk will be published as a free app on the App Store in May – until then, we will tell you more about the features and the user experience in a forthcoming post on this blog. Stay tuned!
Thanks to the team of C-Side Productions for providing the shooting material (shot on RED EPIC), to the protagonists, and to Sinner DC for the music. This video wouldn't exist without Diogo Costa, who was the second cameraman with his Canon 5D and who did the editing & compositing.
We are developing a powerful generic platform to make video storytelling natively compatible with the web, but that innovative tool will only make sense in the hands of creative people with original projects.
On a technical road full of programming languages, we are fueled and inspired by several exciting projects that will come into the world over the coming months.
Here are short introductions to selected projects we are developing with various partners around the world; this is a first selection, and other projects will be presented in the coming weeks.
ZERU is an innovative transmedia project. An interactive graphic novel takes the reader on a personalized journey through a compelling story / thriller about the plight of Albinos in Africa, while an online documentary platform showcases real-world footage, facts, interviews and other relatable content.
Memoways is the close technical and conceptual partner on this project. We are looking forward to bringing this project to life, with the aim of creating a concept that can be used by others.
The “Visions dans le réel” project is a transmedia platform that empowers its users to compose unique and surprising movies based on existing audio-visual fragments.
In the case of this project, it will be mainly footage from documentary movies shown each year at the festival «Visions du réel» in Nyon.
The aim is to connect users to the audio-visual memory of a specific space – in this case Nyon with the festival – by being able to read & write stories within this shared memory.
It works the following way: the user generates his/her movie with the help of our smartphone app, which translates the form of his/her path in real time into a narrative playlist of previously geolocalized media files.
First comes the interactive experience where the user (walker) hears the sound of the movie (walk and listen), then comes the reception where it is possible to watch the resulting personal movie (watch and listen).
Each trace and resulting movie is connected through the user with the initial footage: a network of people, memories, usages and places creates a world of organically changing and evolving stories.
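The core mechanism of translating a walked path into a playlist of geolocalized media can be sketched as a nearest-neighbor selection. The data structures, coordinates and distance threshold below are hypothetical illustrations, not the platform's actual format:

```python
# Sketch: turn a recorded path into a playlist of geolocalized clips
# by picking, for each recorded position, the nearest media file.
import math

def distance(p, q):
    """Planar approximation of distance in degrees; fine at neighborhood scale."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def path_to_playlist(path, media, max_dist=0.002):
    playlist = []
    for pos in path:
        nearest = min(media, key=lambda m: distance(pos, m["loc"]))
        if distance(pos, nearest["loc"]) <= max_dist and nearest not in playlist:
            playlist.append(nearest)
    return playlist

media = [
    {"title": "harbor interview", "loc": (46.383, 6.239)},
    {"title": "market scene", "loc": (46.381, 6.236)},
]
walk = [(46.3830, 6.2391), (46.3811, 6.2362)]

playlist = path_to_playlist(walk, media)
```

A real implementation would use proper geodesic distances and take walking speed and metadata into account, but the principle stays the same: the path itself writes the playlist.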
Why wouldn't it be possible to keep track of and gain access to interesting stories, taking advantage of the dematerialization of content?
Why wouldn't it be interesting to connect visions together and create links between questions, stories, people and locations?
Why wouldn't it be stimulating to grant independence to images and see that they can tell us multiple stories according to the various contexts they can be embedded in?
Why wouldn't it be valuable to create a one-to-one interaction scene where authors and spectators can mix and remix their mutual exchanges?
As mentioned in previous posts, we are working hard on bringing all the components of our platform into the hands of our clients and their end users as soon as possible – and we're getting closer and closer to that point.
This post gives an overview of the various applications and services we will release this spring.
Please note that the screenshots are provisional captures of our work: the final look & feel will be different, with a more unified and simplified design.
More detailed posts will follow to go deeper into the details of each component.
Memoways is all about responsive storytelling: every story within Memoways is generated through the interactions and the behavior of the end user.
That means that every story is unique, tailored to each individual user.
The mobile app: Memowalk
Using the Memowalk mobile application, you can compose as many movies as you wish: each personal walk will generate its own unique movie based on previously geolocalized footage. Like an audio walk, you hear contextual stories, but you get more: the freedom to walk however you wish, «driven» by your immersive experience!
At the end, you get a unique and surprising movie that you can share with your friends.
Memowalk will be available on the iOS App Store as a free app but can also be branded for each of our partners.
Screenshot of the record screen: the movie is generated and visualized in real time
The web based remix tool: Memobeam
A Beam is a dynamic playlist that can be remixed. Just by clicking on keywords, the user can remix a whole set of footage in real time and create their own story.
Video will never be static again: like an evolving feed of content, Beams will keep your content alive!
The various beam components (video player, remix & grid view) will be available through embed code.
Screenshot of a beta version of our “bubble remix”, where interacting with tags results in a remix
The native OS X indexing software: Tansa
For the moment this application is only available on direct request – but we are working towards distribution on the App Store.
This lightweight and fast software will allow you to upload, index & create original projects:
Screenshot of the main interface, with the indexing of a project with “subjective” metadata
The web based admin console: Michi
The administrator of a Memoways account will be able to:
Screenshot of the login into Michi
The Memoways Cloud Services are based on a scalable server able to communicate with each component in a generic yet flexible way.
We made a lot of speed and automation enhancements, making this back end future-proof for the additions to come.
More detailed information will follow in a coming post.
The biggest change in our APIs was made around the Beam and Facet concepts. More to come on that later… (you need the whole story to get a feel for the power & potential that will come with those APIs).
Since mid-2013 we have been talking about a new concept: the Beams. Sounds like a new music band, wildly throwing images around to the public?
In fact, the public outcome of this concept is the solution we call “Memobeam“.
A Memobeam is a smart playlist that can be remixed.
Think of your smart playlists in iTunes, or any other music platform: you can set up a dynamic playlist based on, let's say, English rock music of the nineties. It's a smart playlist because when you add new music that fits that category, the new songs are automatically added to your playlist.
Technically, a Memobeam is a data feed, like RSS or a Twitter feed, which can be viewed through a URL.
This feed contains video assets and metadata. What distinguishes a Memobeam from an RSS feed is that the Memobeam is structured through a query made against the Memoways database.
Within our Memoways OS X application, or the coming web admin console, the project manager can save a specific query (like: find all assets that were created in the '60s and talk about labor in Paris). This saved query can then become a Memobeam, with a URL that presents the filtered content on a website.
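Conceptually, a Memobeam is just a saved query plus a feed rendering of its results. The sketch below makes that concrete; the query format, field names and feed structure are hypothetical illustrations, not the real Memoways schema:

```python
# Sketch: a saved query evaluated against an asset database,
# rendered as a feed (like RSS, but database-backed).
import json

ASSETS = [
    {"id": "a1", "decade": "60s", "topic": "labor", "city": "Paris"},
    {"id": "a2", "decade": "60s", "topic": "leisure", "city": "Paris"},
    {"id": "a3", "decade": "70s", "topic": "labor", "city": "Paris"},
]

def run_query(query):
    """A saved query is modeled here as a dict of field -> required value."""
    return [a for a in ASSETS if all(a.get(k) == v for k, v in query.items())]

def beam_feed(name, query):
    """Render the query result as a named feed with a stable structure."""
    return json.dumps({"beam": name, "items": run_query(query)})

feed = beam_feed("labor-paris-60s",
                 {"decade": "60s", "topic": "labor", "city": "Paris"})
```

Because the feed is generated from the query each time it is requested, any newly indexed asset matching the criteria appears in the Beam automatically, exactly like a smart playlist.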
Once the content is beamed it can be seen, played, remixed and embedded.
The video player we designed specifically for this purpose plays the assets (the unedited footage) one after the other in real time, like… a movie.
The user is also able to watch a movie that is automatically generated out of a structured output of the database of a content creator / owner.
Memobeam is a movie generator in the hands of “simple” web users (in fact, all of us). That rocks!
As the Memoways OS X application focuses on adding a lot of information to the content through metadata, the indexed assets become “intelligent” and potentially “independent” (of an author).
The metadata that describe the content do not only answer questions like “who” made the video and “when”, “how” and “where” it was shot – they also add subjective information about the content, like tags (simple keywords) and structured taxonomies that describe what the content is about. It is with the help of all those words that the semantic connections between the assets will be managed in an automatic and algorithmic way.
The end user sees only the significant metadata (themes, authors, date, location, tags etc); by clicking on the metadata, the user basically filters the actual Memobeam with those keyword(s). The list of filtered assets is immediately updated and it’s possible to watch the resulting movie.
In order to manage a playlist that is more than a filtering of the Memobeam, we will add in the near future the advanced feature of editing through a timeline.
It's not only the type of content but also the placement within a playlist that will be in the hands of the end user.
It is then possible to embed the remix of the Memobeam as an independent Beam within any blog or website, Facebook etc…
Our Memobeam solution is targeted at content creators and content owners who want to give access to their original footage in an interactive and playful way.
Today the amount of created content has exploded (and the trend continues): why not use and leverage the “missing” content?
From the Memoways website: Realize your content’s true potential
An estimated 95% of produced media is never published. This happens not because of mistakes or lack of effort, but due to the simple fact that editors have to deliver a product of a very specific length and style. Be it video, photo, audio or text, most of what is created for a single project goes to waste.
Memoways seeks to tap into the potential of these unused documents.
Instead of measuring a media project's value in terms of an end product, our platform puts the emphasis on having a vast database of raw media. Everything is catalogued and tagged in great detail, allowing the innovative Memoways Cloud Services to generate an endless amount of full-fledged end products, tailored to the project's demands in length, style, genre and much more.
The best thing is that, while the raw media has to be catalogued only once, your assets can be reused indefinitely in other projects – either by their creator/owner or anyone they choose to allow.
The first usage is to present a coherent collection of video footage on a website. A structured collection, but still open, expandable, remixable – in short: a living collection.
Today, when we shoot a video, we create files. Those files are then integrated and structured (edited) within a timeline – time-consuming and intensive work. At the end, the movie is exported and compressed for the web: the result is a file that has lost all the context of the initial content. To be found, to go viral and to “live” on the web, context has to be added again, manually: some keywords are attached, sometimes with cue points or interaction markers.
It's like building a website only with PDFs: if you want to update a part of the text or add an image, you have to go back to your editor, export, upload and relink.
Is anybody still working like this? Yes: people who deal with video…
Video has to be treated as a first-class citizen of the web, and not as an alien (known as the black hole of the internet, or the web video problem).
The Memobeam solution we provide on top of the Memoways platform leverages your initial content for multiple usages, like the one described here. But a Beam can also be integrated within our other solution: Memowalk.
Your assets will live multiple and complementary lives… The more people get in touch with your content in a personalized and original way, the more benefits come back to the content creator or owner.
We are still working on the layout and the features – but we expect that for Q1 2014 the first Memobeams will make some noise on the web !
After a good amount of work, our mobile application Memowalk is on its way into testing. It's a big release that will uncover the potential of the Memoways platform and its underlying concepts.
In a few words: the Memowalk mobile application generates a unique and surprising movie out of a recorded path.
You just walk through a neighborhood and our smartphone app tracks your progress and translates your itinerary into a story drawing from the multitude of virtual information held in the ‘augmented space’. All this in realtime! Once your trajectory is translated into a movie you can watch it and see the movies of other people.
Here are the main features of this application:
Please note that the screenshots do not show the final interface, design and features; as always, changes are already happening… It's really a first glance at the mobile application we are very proud of. Thanks to Nicolas Goy for his incredible work, both on the iOS and server side.
There are more important and original features we are working on, but those will be revealed in 2014.
We plan to publish the application on the App Store at the beginning of 2014, so stay tuned…
ZERU is an innovative transmedia project: an interactive graphic novel offers the reader an individual and immersive journey through a linear story, while a documentary platform dynamically provides real-world footage, bringing facts, interviews and other kinds of content relevant to the cause.
The originality of this transmedia project is the bidirectional and personalized connection between the two mediums: the reader's interactions within the interactive graphic novel let them follow their own narrative path through the story, while creating a trace that will trigger an individual surprise from within the documentary platform. More on that later.
The synopsis of ZERU:
Set on a business trip to Tanzania, a restless African-American financier, WALTER, is confronted with the horror of the lives of albinos, persecuted and murdered by sorcery networks. His path crosses that of BADU, a 10-year-old child hunted like an animal because of the color of his skin. Called to choose between indifference and action, he'll soon see his routine trip turn into a terrifying nightmare. ZERU is a dark and suspenseful thriller, a merciless hunt that reveals the full horror of a situation rarely tackled by the media.
While stepping through the chapters of this drama, the reader can interact with visual elements: symbols and parts of the image can be clicked. The subtle interactions preserve the immersive narrative experience while personalizing the way the story unfolds. The user will be able to interact with specific visual elements that change and adapt with sounds and colors.
The reading experience explores the native possibilities of the medium that enables it (the tablet or the smartphone); the truly innovative and original part of the experience is the connection to the documentary web platform, through a surprise generated out of the interaction path of each individual reader… This surprise, at the end of the reading, will be the generation of a unique and personalized movie.
This movie will be a starting point for a new exploration of the cause: stories coming from the real world, accessible through this individually generated movie but also by navigating the linked documentary elements over a map, a playlist…
We had the opportunity to pitch the project to a dozen people from the publishing, broadcast and narrative industries. The feedback was all very positive and enthusiastic: the story, the aesthetics and the interaction concepts made sense to everybody.
On top of that, the project got a prize: “The ITVS Impact Pixel Market Prize for the Best Social Impact Project was awarded by Jim Sommers, svp content management at ITVS to producer Dan Wechsler, and project manager Ulrich Fischer for Zeru.”
With this financial help and the symbolic impact of the prize, we will be able to push the project one step further, in order to raise production money to achieve a demonstrator for Q3 2014.
Memoways is the close technical and conceptual partner on this project – we are looking forward to bringing this project to life, with the aim of creating a concept that could be used by others.
We will roll out a lot of new features by the end of this year; the following lines give a first glimpse of what is coming…
Beams are coming: MemoBeam!
This one will be the biggest new feature.
Imagine an open, interactive & dynamic video feed that allows the user to navigate through a coherent list of assets previously created by the content owner.
It's like having access to an open timeline and being able to remix, aggregate and embed the result of your own timeline (your filtering choices).
A typical Beam would hold between 10 and several hundred assets (video pieces, rushes) that have a semantic coherence (like a single theme with a specific localization).
The biggest difference from what YouTube and others provide? Beams give access to individual assets (and not to closed movies), allowing anybody (from the content owner to the end user) to generate their own combination (edit) of those assets.
Editing was never this simple!
The magic? Using words to describe, connect and then combine content into a story… And of course with a custom HTML5 video player.
But that’s not all. Imagine this:
- Beams on a map: the assets show up on a map, where it is possible to see & play them. By filtering and selecting assets your way, the map will output… a movie!
- And more surprises: mix & mashup Beams; User Generated Beams…
Brand new mobile application: MemoWalk.
We already have version 1.0 of our mobile apps (for iOS & Android); version 2.0, currently under development, is a complete rewrite with lots of new features, like:
- Connect to Beams: get the content that is near the place you are in; get the type of content you are interested in; get access to premium content…
- See the content on a map: the available content is shown on a custom map; you can play the content directly from the map;
- Improved editing engine: with access to more metadata and an advanced, customizable algorithm: the resulting movies will seamlessly fit into your immediate context and adapt to your behavior through our own behavioral pattern algorithm;
- User creation: connect with your Facebook or Google+ account or simply create a user with the Memoways login
And: new design (yes!), Bubble Beams ®, thumbnails, user preferences (language, type of content) etc
Memoways Admin Console: Michi.
Please welcome Michi, our responsive web administration console. Michi means “street” in Japanese: your memo way is possible on michi!
With the help of this administration console, our customers will be able to:
- administer the content (the assets), the usages (see our solutions) and the taxonomies (lists);
- create Beams through smart filters; edit & update the Beams;
- get access to statistics and analytics (see your own data usage for the Pay-As-You-Go service);
- administer the users, editors and team members;
And a huge, breathtaking feature: build your own mobile app directly from Michi and get an email invitation to install your own app on your mobile device…
Creating your own mobile apps with your own content was never this simple!
Memoways OS X: Tansa.
Our editing and indexing software will get the following new features:
- taxonomy editor (build your own lists);
- Beam Creator (generate your Beams directly out of the application);
- advanced editing features (group editing; weight editing; conditional editing)
By the way, it's the big brother of Michi, so we called it Tansa. Hello, Tansa! Michi being the street (the way), Tansa is the inquiry, the question…
Memoways Cloud Services: the pulsing heart of your memory.
Our Cloud Services are the central core of the Memoways platform.
- update of our APIs: more power & access to the content & intelligence (metadata);
- update of the web SDK to be able to “Beam” and to present Timelines (Walks and other Playlists) in an open and embeddable way
But most of the new features of the Cloud Services will not be visible as such: they allow the integration of the Beams and all the other new things that are coming down the road, not so far away anymore.
And that's not all: as we've received solid financial support lately, more surprises and new features are being integrated into the Memoways Platform, so stay tuned…!
The CTI research project “Move Your Story”, which we are working on in partnership with the TAM team at the University of Geneva, is evolving and providing its first valuable results.
To illustrate the outcomes of this project we did a short video:
Memoways provides a mobile application that creates movies according to a person's movements. The “Move Your Story” project will improve the Memoways platform by taking into account the behavior and the current interests of the user. This information will be obtained by analyzing the data produced by the different sensors of a smartphone.
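As a toy illustration of what behavioral-pattern detection from sensor data can look like, here is a sketch that guesses an activity from the variance of accelerometer readings. The threshold values and the classification scheme are invented for illustration; the actual research algorithm is far more elaborate:

```python
# Toy sketch of behavior classification from accelerometer samples,
# in the spirit of "Move Your Story". Thresholds are illustrative only.
import statistics

def classify(accel_magnitudes):
    """Rough activity guess from the variance of acceleration magnitude."""
    var = statistics.pvariance(accel_magnitudes)
    if var < 0.05:
        return "still"
    if var < 1.0:
        return "walking"
    return "running"

still = classify([9.81, 9.80, 9.82, 9.81])   # near-constant gravity: at rest
moving = classify([9.5, 10.4, 8.9, 11.2])    # rhythmic variation: walking
```

The editing engine can then use such a label, combined with location and interests, to steer the choice and pacing of the clips.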
Here are two screenshots of 1) the interface of the Android app we are using to capture behavioral patterns over time, and 2) a presentation of the results.
We will include this algorithm within our already existing Android mobile application for Q4 2013.
- map: see assets (content) on the map; get your own position and trace; see the recorded path on the map with the chosen audio files;
- audio: play individual audio files (on the map / through a list)
- use scenario: choose the “genre” or type of content you wish to get (drop-down list with 2–4 choices).
- improved interface (UX & design)
- integration of the behavior algorithm & improvement of the editing engine
- integration of new taxonomies & metadata to improve the editing engine
- more to come…
Here are some screenshots from the other developments of the Android version of our mobile application (map, audio & use scenario).