Fellowship in review
Since tomorrow is my last day as a Knight-Mozilla OpenNews Fellow, I feel the need to revisit these past 10 months to document what I did, what I left unfinished, and some things I'd like to do in the future. An important note: even though I'm going to talk mostly about projects, the most important takeaway from this process is the people I met, the experiences I lived, and all the things I learned. Without further delay, this is what I've been doing during the last year:
The Coral Project
The organization that hosted me during my fellowship was The Coral Project, a Mozilla Foundation project in collaboration with the New York Times and the Washington Post, created to build open source software, research, and tools to improve online conversation around journalism. As soon as I arrived at the Coral I found not only a super diverse and proactive team but also a great group of people, making my job easier and more enjoyable.
The great (and also slightly terrifying) thing about the fellowship, from both the OpenNews and Coral Project perspectives, was the total freedom to do the work I wanted to do. They believed I could do interesting and creative things, and they gave me the tools and the time to develop them. Besides having a lot of time to experiment with ideas, some with good results and others that didn't work, my main job with the project was to help develop the core open source tools that the Coral Project offers to newsrooms of all sizes to improve their communities.
Trust is a tool that allows a better understanding of commenter behavior on your site by applying filters and creating dynamic user lists. The main idea is that by generating user lists, filtered by customizable variables, you can detect behavior and engagement patterns. Once the user groups are identified, we can take actions like emailing a gift to a user whose comments meet our quality criteria.
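To give a feel for the idea, here is a minimal sketch of building one of those dynamic user lists. The field names and thresholds are purely illustrative, not Trust's actual schema:

```javascript
// Hypothetical sketch: build a dynamic user list by filtering
// commenters on customizable variables, in the spirit of Trust.
function buildUserList(commenters, criteria) {
  return commenters.filter(
    (c) =>
      c.totalComments >= criteria.minComments &&
      c.acceptRatio >= criteria.minAcceptRatio
  );
}

// Example data (invented for illustration):
const commenters = [
  { name: 'ada', totalComments: 120, acceptRatio: 0.97 },
  { name: 'bob', totalComments: 5, acceptRatio: 0.4 },
];

// "Trusted commenters" list: at least 50 comments, 90% accepted.
const trusted = buildUserList(commenters, {
  minComments: 50,
  minAcceptRatio: 0.9,
});
// trusted contains only 'ada'
```

Once a list like `trusted` exists, an action (such as sending that thank-you email) can be run over it.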
The development of Trust was already underway when I started my fellowship, and I participated mostly in the development of the tool's front end. We use React and Redux for all the Coral Project front-end projects, and Go, Node.js, and MongoDB for the back end.
The first project I was involved in from the beginning was Ask. As its name suggests, the idea behind this product is to allow journalists to ask their audience questions, manage the submissions, and embed answer galleries to display on their website.
Because The Coral is a very ambitious project, we can't waste time. Since we knew we would need to embed widgets in third-party sites for Ask and subsequent projects, just one week after starting my fellowship I began investigating good practices and new ideas around building embeddable widgets. I ended up with a blog post, a ton of ideas thanks to Ted Han's insights, and my first steps with Preact, a framework that gave me a lot of satisfaction during the year (yep, a front-end framework can be satisfying) and that I later used in Ask and personal projects. That's also how I met Jason, who became a great inspiration on how to maintain an open project.
Since I had been playing with embeds, it was natural to work on this aspect of Ask. I worked mainly with Pablo Cuadrado on the embeds engine, with better than great results. In fact, we don't even really know how, but the Ask embeds often load much faster than the host websites. Besides working on the embeds, I spent a long time with the Ask front end, which lives next to the Trust one and is called Cay.
Ask is in beta and we have already collaborated with newsrooms, for example during the presidential election in the United States.
The last big project for the Coral Project in 2016, and likely the most anticipated one, is Talk. Just as Ask was built for unidirectional communication from the audience to journalists, Talk is about conversation. It is our take on the comments section of a website, but unlike any other comment system, we want to create configurable software so the comment box after your article stops being just that and becomes something more appropriate and engaging for each audience.
Talk is under heavy development and we expect a beta really soon. I also worked on the core development for this project, but as with Ask, I could focus for a while on a specific aspect I was especially interested in: comment moderation. I developed the moderation system that was then integrated into the administration tool. For the moderation tool I focused on user input speed and on allowing the tool to be used without connectivity or under restricted-data scenarios. That's how I met the Offline First community and learned a lot about developing Progressive Web Apps.
As with anything we build at The Coral Project, Talk is open source and the source code is available on GitHub.
Alternative Front End for Ask
When we started planning Ask, one of the architectural goals was to decouple the front end from the back end and from the widget generation (forms and galleries). One effect of that decision is the possibility of creating new ways to build the forms using just our back-end API, without the tool we developed inside the Coral Project.
Trying to showcase what is possible using the Ask API, and also applying some tricks I learned while working on the natural language processing side of GuriVR, I built an alternative form generator that, instead of the typical drag-and-drop widgets, tries to engage the journalist in a conversation, anticipating some things and making them reflect on the questions asked and the input type of each question (e-mail, number, long text, etc.). The code is available on GitHub, and I also talked a little about the input type inference in a Source post.
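The gist of that input type inference can be sketched with a few keyword heuristics. This is a simplified illustration of the idea, not the generator's actual rules; the keyword lists are invented:

```javascript
// Hypothetical sketch of inferring a form field's input type
// from the wording of the question itself.
function inferInputType(question) {
  const q = question.toLowerCase();
  if (/\be-?mail\b/.test(q)) return 'email';
  if (/\bhow (many|old|much)\b/.test(q) || /\bnumber\b/.test(q)) return 'number';
  if (/\b(describe|tell us|explain)\b/.test(q)) return 'longtext';
  return 'text'; // safe default: a plain text field
}

inferInputType('What is your e-mail address?');          // → 'email'
inferInputType('How many years have you lived here?');   // → 'number'
inferInputType('Describe your experience');              // → 'longtext'
```

By asking questions conversationally and inferring types like this, the journalist never has to pick "number field" from a widget palette.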
Ask WordPress Plugin
WordPress plays a big part in the online news scene, and that's why creating integrations for our products is a priority. I built this integration to embed Ask galleries and forms, and also to allow administration tasks without leaving the WP admin. The plugin and its code will be available soon.
After a conversation with people at BuzzFeed Labs, where they showed us their BuzzBot, I thought about building an Ask Bot that, when configured to point at an Ask instance, talks with users who want to complete questionnaires.
The code is available on GitHub, and I'm more than happy to help you set it up alongside your Ask installation.
During the first "Coral Week" at the Washington Post HQ, I took some hours to prototype a native mobile app for Trust, the first Coral Project product, using React Native, and also added some proofs of concept for moderation in Talk. For example, the gif above shows "Quick moderation", which uses our products' APIs to moderate comments in a Tinder-like fashion.
I killed this project after that week since it wasn't aligned with the Coral roadmap and there were other priorities, but it was pretty fun and a very good proof of concept of what can be done without modifying Trust's core code. The other takeaway for me was proof that I could experiment freely at the Coral thanks to the support of the team, especially Andrew and David.
As mentioned before, one of my interests while working on Talk was comment moderation. Even before formally starting on the project, I had some time to research and prototype a moderation app. This app never saw the light of day, but most of its concepts were incorporated into the current Talk moderation interface.
The codename for this project is Muddy (a tribute to Muddy Waters), and it is what's called a Progressive Web Application. The idea was to build a moderation app that could be used not only by moderators but also by journalists, for example while riding the subway. I had two main goals when I developed it:
- It must work under bad connectivity and offline scenarios
- It should allow different types of input so users can perform moderation actions efficiently under diverse scenarios and use cases
To make the app work offline I used new web standards to build a Progressive Web Application that can cache resources and sync the next time the user is online. On the input side, I worked with speech recognition, keyboard shortcuts, and touch gestures. I also incorporated what I did with "Coral Native", adding a "Swipe mode" with Tinder-like cards to make moderation more approachable.
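The offline behavior boils down to a "network falling back to cache" strategy. In a real PWA this logic lives inside a Service Worker's fetch handler; here is a simplified stand-alone sketch (with the cache and network injected as plain objects, purely for illustration):

```javascript
// Hypothetical sketch of the network-falling-back-to-cache strategy.
// Try the network first; on success, store a copy for later.
// When the network fails (offline), serve the last cached copy.
async function cachedFetch(url, { cache, network }) {
  try {
    const response = await network(url);
    await cache.put(url, response); // keep a copy for offline use
    return response;
  } catch (err) {
    const cached = await cache.match(url);
    if (cached) return cached;      // offline: last known copy
    throw err;                      // never seen and no network
  }
}

// Minimal in-memory stand-in for the Cache API, for demonstration:
const store = new Map();
const cache = {
  put: async (url, response) => { store.set(url, response); },
  match: async (url) => store.get(url),
};
const online = async (url) => 'fresh copy of ' + url;
const offline = async () => { throw new Error('network unavailable'); };
```

After one successful fetch while `online`, the same call with the `offline` network still resolves from the cache, which is exactly what lets a moderator keep working on the subway.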
I covered some of the decisions behind Muddy in my BAFrontend talk on Progressive Web Apps.
Personal projects and other experiments
If there's one project that defines my fellowship, it is GuriVR. It was born from a question I asked myself during MozFest 2015 after seeing great virtual reality projects: How far can we simplify prototyping VR experiences? Can we make it so a journalist (or any non-graphics programmer) could do VR? This question mutated through a couple of prototypes before taking its final form: a virtual reality online editor using natural language as the input, both in English and Spanish, a Twitter bot that takes tweets and returns a VR scene, a Slack bot, and many more things.
I wrote extensively about the technical side of the project, but there is another important aspect that is tied to the fellowship spirit: I worked on this project as I became interested in new things during the year, and as you can see in the chart below, my interests fluctuated a lot.
This project is tied to my interests (and other people's requests), but it also helped me test some ideas I had during the year. That's why, even though I worked on it from time to time throughout the fellowship, there were two specific nights when I wrote the key parts of Guri:
The first one was the night of the March Hacks/Hackers New York meetup. I had the idea of simple VR for everyone in my mind, but I didn't have a clear way to implement it. During that meetup, Sisi Wei from ProPublica showed how they translate articles into different languages using Google Docs, structuring the translations with ArchieML. That same night, after asking Mike Tigas some questions on the matter, I named the project and started the first version. "Write your story in a Google Doc, add annotations with ArchieML, and get your VR piece":
The second night was the one before presenting Guri to Miguel Paz's CUNY J-School class. At that point I had a project that took an annotated Google Doc and turned it into a VR scene, but I felt there was still a big learning curve for people not used to structuring data (and I also wanted to impress the students). So that same night I started writing a parser that takes natural language from the Google Doc (instead of ArchieML annotations) and transforms it into a VR experience. The code I wrote that night is practically what now powers the NLP side of GuriVR.com.
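The core trick of a parser like that can be shown in miniature: scan each phrase of the description for known entity keywords and emit a list of scene elements. GuriVR's real grammar is far richer (positions, durations, Spanish support); the keyword list below is invented for illustration:

```javascript
// Very reduced sketch of the idea behind the natural language parser:
// one scene element per phrase that mentions a known entity keyword.
const ENTITIES = ['panorama', 'image', 'text', 'audio', 'video'];

function parseScene(description) {
  return description
    .toLowerCase()
    .split(/[.,\n]/)                                     // one chunk per phrase
    .map((chunk) => ENTITIES.find((e) => chunk.includes(e)))
    .filter(Boolean)                                     // drop phrases with no match
    .map((type) => ({ type }));                          // minimal scene element
}

parseScene('My scene has a panorama of the beach, some text, and an audio clip');
// → [{ type: 'panorama' }, { type: 'text' }, { type: 'audio' }]
```

Each resulting element would then be mapped to an A-Frame entity to build the actual VR scene.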
Just like those two nights that changed the course of the project based on new ideas I had at the time, many more nights and days piled more features into what you can use now on GuriVR. That hands-on spirit, transmitted from OpenNews, helped me develop my creativity and my interests through this project, which now has many different use cases.
As a bonus, Guri got me participating in the community around A-Frame, the VR framework developed by Mozilla, and meeting its creators, who were really helpful during the whole process.
Guillermo showed me the main concepts of this universal front-end framework many months ago. That's why, after spending many hours configuring a new project, I asked him whether they had plans to finish implementing the framework. That's how I started contributing to Next.
This is one of the projects where I both enjoy contributing, since I learned so much from the amazing Zeit team, and benefit as a user. A win-win.
During the fellows' work week in Buenos Aires, after Martín showed me the Inception app for Android, it occurred to me to work on Isopo, a web-technologies version of that app with configurable audio filters. The idea is that augmented reality doesn't require the user's camera: we can just listen to what's coming from the phone's microphone, apply some filters, and feed it back to the ears, modifying our perception of reality.
StoryTeller is an experiment that uses speech recognition technologies to interact with WebVR, allowing VR exploration using the human voice.
I didn't finish this project, but it helped the development of other GuriVR features and showcased the possibilities we have to experiment in the VR space.
A-Frame Chartbuilder component
Experimenting with dataviz in 3D, I created this A-Frame component that takes charts generated with ChartBuilder and renders them in a WebVR scene. This experiment demonstrates the interoperability between the WebVR world and popular web libraries such as d3.js.
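At its heart, a component like this is a data transformation: chart values in, positioned 3D geometry out. Here is a hypothetical sketch of that step (the spacing and scale parameters are invented; the real component also handles axes, labels, and ChartBuilder's full data format):

```javascript
// Hypothetical sketch: turn a series of chart values into positioned
// 3D bars that an A-Frame scene could render as <a-box> entities.
function toBars(values, { maxHeight = 2, spacing = 0.6 } = {}) {
  const max = Math.max(...values);
  return values.map((v, i) => ({
    x: i * spacing,                 // lay bars out along the x axis
    height: (v / max) * maxHeight,  // normalize to the tallest bar
  }));
}

toBars([10, 20, 5]);
// roughly [{ x: 0, height: 1 }, { x: 0.6, height: 2 }, { x: 1.2, height: 0.5 }]
```

An A-Frame component would then create one entity per bar, setting its position and its box height from these values.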
Shooting 360 at Coachella
I had been working on WebVR projects involving a lot of 360 video, but I had never filmed in my life. That's when Miguel told me he had a student who was interested in filming with his new 360 camera, and I met him immediately.
Joseph told me he was about to go to Coachella to film in 360, so I got a press pass and we went to film the festival. This was one of those experiences I couldn't have had in a different context, and it helped me understand the content producers I worked with much better.
Bla Bla Bla | Talks
WebVR & GuriVR
Offline-First & Progressive Web Applications
- Encryption and offline access for your site @ Africa Media Party
- Progressive Web Apps @ BAFrontend
- Offline-First Panel @ SXSW 2017
- Virtual Reality for the rest of us
- Enabling Offline First Experiences on the Web with Service Workers
- Universal Rendering with Preact
- Exploring new techniques on building composable widgets for the web
- Low-Budget Natural Language Processing
- Introduction to Preact and Webpack screencast
I had such a great time with my fellow OpenNews Fellows, and as they say, I had the honor of introducing them to the world of alfajores.
See you soon!
I'm ending the fellowship, and new things are about to start. I'll be back to talk about my new (for now uncertain) adventures.

See you soon!