Categories
Projects@Work

AI Literacy Foundation for India

Getting started on some temporary sites for my colleague Saleem, who is setting up an amazing foundation in India focused on AI literacy and computational thinking for children in all schools, including low-cost private schools and all government schools.

Home

India is home to the largest education system in the world, with over 270 million K–12 students and nearly 250 million youth aged 15–24—a population almost equal to that of the entire United States. This young demographic represents extraordinary potential, but today that potential remains largely untapped. Roughly 45% of Indian youth are unemployed or underemployed, not because they lack talent, but because they lack access to the skills that the modern workforce demands.

The urgency could not be greater. Artificial Intelligence is rapidly transforming every sector—from healthcare and agriculture to finance and education. Yet India’s students are not equipped for this new reality. Computer Science (CS) and AI education remain luxuries, accessible primarily to urban, English-speaking children in private schools (roughly 31% of overall enrollment), while the vast majority—the 69% of India’s students enrolled in government schools, many of whom learn best in regional languages—are left behind.

The good news is that momentum for change is building. India’s National Education Policy (2020) and the National Curriculum Framework (2023) both emphasize computational thinking and AI literacy as essential twenty-first-century skills. The time is ripe to ensure that every child in India—not just the privileged few—has the opportunity to learn the language of the future. However, domestic solutions to meet this opportunity do not yet exist, presenting an opportunity for world-leading, established solutions to be localized and extended to help deliver these learning outcomes.

Indian classroom tablet being used for computational thinking education curricula
Categories
Projects@Work

Code.org Teacher Training with Pies Descalzos in Colombia

Had the opportunity to visit the marvelous cities of Cartagena and Barranquilla, Colombia to participate in a joint teacher training with 4 cohorts of elementary school teachers, led by the Pies Descalzos foundation and sponsored by Amazon Future Engineer, a donor to Code.org.

Shakira’s foundation has been very active in building several model schools with very strong infrastructure in marginalized communities across several cities. We visited some of these schools, and I was moved to hear the administration describe how the entire community had been uplifted by the presence of a modern school with meal services, playgrounds, performing arts centers, libraries, and computer labs.

Our facilitators led 2-day workshops for teachers who had no prior experience teaching computational thinking and computer science. It is very inspiring to see how quickly the teachers grasp the concepts, and to see their genuine engagement with the potential of these curricula for their students.

Categories
Projects@Work

Code.org Global Multi-Lingual Marketing Pages

A fun mini-project: it turns out that despite 10 years and ~500M+ pages served per month, many to non-English-speaking teachers and students, the code.org website as of 2024 was built on an in-house CMS that has unintentionally been SEO-hostile to Google’s crawlers.

So, as a first step to unveiling our amazing work delivering computer science, computational thinking, and artificial intelligence education to millions of kids in over 100 countries around the world, today I hacked together a new microsite that surfaces details of our programs and courses in Spanish, Hindi, and Korean. This will be followed by as many as 30 additional languages, as I study the web-crawling and search traffic patterns, to best connect these searches back to our curriculum and “app”, the Code.org LMS (courses, teacher administration, professional development, and actual classroom activities for kids).

The code.org materials in Spanish are comprehensive and delightful, edited by yours truly with help from my colleague Karina, as we are both Chilenos!

This has been a super fun project, as I can see immediate impact: traffic from multilingual queries is reaching the site, and much of it is converting to sign-ups via the create-new-account buttons. Kudos to Webflow; such an amazing website-building tool, and a pleasure to use.

Over the period of the project the search volume was pretty good, and in particular the average search position was stellar (~5). However, despite tagging these pages with the language code and native content in Thai, Italian, Tamil, Hindi, etc., the primary code.org domain continued to receive significantly more traffic, even though it was serving English page results for Thai/Tamil/etc. queries. Here is the volume of search traffic to the microsite:

And by comparison, look at just the search traffic from one country Thailand, in that same period:

So, quite an interesting experiment. Having a native Thai landing page made a dent (580k impressions vs. 3.13 for the primary domain’s English page), but the click-through rate was 1.6% (to the Thai page) vs. 43% (to the English page on the primary domain).

The take-aways for me are that (a) more time was needed to establish the authority of the subdomain, and (b) using a subdomain is a huge disadvantage, especially when competing against a 10-year-old, trusted main domain.
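For reference, the per-language tagging described above is done with hreflang alternate links in each page’s head. Here is a minimal sketch of generating them; the locale list, URL pattern, and helper function are illustrative assumptions, not the actual code.org site configuration:

```python
# Sketch: generate hreflang alternate link tags for a localized landing page.
# The URL pattern (/{locale}/global) and locale list are illustrative
# assumptions, not the actual code.org configuration.

def hreflang_links(base_url: str, locales: list[str], default: str = "en") -> list[str]:
    """Build <link rel="alternate"> tags so crawlers can map each
    language variant of a page to queries in that language."""
    links = [
        f'<link rel="alternate" hreflang="{loc}" href="{base_url}/{loc}/global" />'
        for loc in locales
    ]
    # x-default tells search engines which page to serve for unmatched locales.
    links.append(
        f'<link rel="alternate" hreflang="x-default" href="{base_url}/{default}/global" />'
    )
    return links

for tag in hreflang_links("https://code.org", ["th", "ta", "hi", "es"]):
    print(tag)
```

Every localized variant must list all of its siblings (including itself) for the annotations to be honored, which is why generating the tags from one locale list beats hand-editing each page.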

The good news is that today we are ready to fold this traffic back into our primary domain. We have staged new marketing website pages using a new CMS (Contentful), and that system is connected to an awesome new TMS (Translation Management System) called Localizejs. With this new TMS we are going to radically reduce our cost and time to market and, most importantly, extend translation coverage to 22 languages for more of our student curricula, and thus reach more students and teachers.

Here’s the new student marketing landing page for code.org in Thai, on the main domain. Now Thai queries should rank for https://code.org/th/global.

For 2026 we are starting to focus on India as a very strategic market for us to expand computational thinking, computer science, and AI literacy education of K-12 students. Offering these materials in Hindi, Kannada, Tamil, Marathi, and Telugu will now be possible thanks to our amazing new AI translation workflows using Localizejs.

And here’s a general overview of all of Code.org’s programs in India.

And here is a complete list of the initial 22 languages we are supporting with this new translation platform:

Computer Science and AI Literacy for kids in Arabic
Computer Science and AI Literacy for kids in Bahasa Indonesia
Computer Science and AI Literacy for kids in Chinese (Simplified)
Computer Science and AI Literacy for kids in Chinese (Traditional)
Computer Science and AI Literacy for kids in Czech
Computer Science and AI Literacy for kids in Farsi
Computer Science and AI Literacy for kids in French
Computer Science and AI Literacy for kids in German
Computer Science and AI Literacy for kids in Hindi
Computer Science and AI Literacy for kids in Italian
Computer Science and AI Literacy for kids in Kannada
Computer Science and AI Literacy for kids in Korean
Computer Science and AI Literacy for kids in Marathi
Computer Science and AI Literacy for kids in Polish
Computer Science and AI Literacy for kids in Portuguese
Computer Science and AI Literacy for kids in Slovak
Computer Science and AI Literacy for kids in Spanish (LATAM)
Computer Science and AI Literacy for kids in Spanish (Spain)
Computer Science and AI Literacy for kids in Tamil
Computer Science and AI Literacy for kids in Telugu
Computer Science and AI Literacy for kids in Thai
Computer Science and AI Literacy for kids in Turkish
Categories
Projects@Work

Code.org in LATAM

I’m so excited to have joined code.org to help with the mission of nurturing global access to computational thinking and computer science education, for every kid, in every school.

I’ll be writing more, but wanted to start by gathering some photos from our work in the LATAM region. My colleague is in Belize this week conducting teacher training, and when she shared some snaps I was deeply moved. I backpacked through Belize in 1993 after college, and I was flooded with happy memories and a strong sense of mission and impact for the work I’m now doing every day.

In April 2025 I had the pleasure of visiting Barranquilla and Cartagena, Colombia as part of our collaboration with the Pies Descalzos foundation, of Shakira (yes, the Shakira) fame. The foundation does lovely work building modern schools and inspiring communities and local governments to rally around improved education and arts programs for kids in government schools.

In May 2025 I combined a personal vacation with my son to northern Peru with some collaboration with our partner in Peru, Code en mi Cole, who conduct Hour of Code workshops and teacher training for CS education across the country. In Lima we attended a symposium on the future of work that discussed how AI and technology are disrupting the skills required for modern jobs across many sectors of Peru’s economy. And Code en mi Cole held an event on tech education at the Museo Metropolitano de Lima, where we met several city and national government officials and presented code.org to the ~100 teachers who attended to learn more.

In Lima we joined two TV programs to share Code en mi Cole’s plans for computational thinking programs, and I got to practice my pitch in Spanish! Hilarious, if you ask me. Go ahead, laugh!

And another interview; this one, I think, aired only on cable, but here it is on Facebook: https://www.facebook.com/share/v/1BVUZ2JciL/?mibextid=wwXIfr

Hopefully both Colombia and Peru will continue to implement technology education and AI literacy through national programs that are broadly accessible and inclusive in government public schools.

Categories
Projects@Work

The High Guide: a Woman-first Psychedelics Podcast

The High Guide is a podcast I’ve been enjoying hosted by my Seattle friend April Pride. April is super passionate about creating space for women who are curious to learn about entheogens and changing their lives with cannabis and psychedelics in particular.

Having read and consumed a lot of media on psychedelics in the last year, I must say I’m struck by how male-dominated the space can feel. I’m sure April’s approach will enhance empathy and understanding for women seeking the voices of other women in this space.

Her YouTube channel is a great place to grab the many podcast episodes on psychedelic mushroom types, microdosing protocols, healing and mental health themes, etc.

I volunteered to help April improve the online presence for the 80+ episodes she created from 2020-2023, and spent the last few weeks using a variety of fun tools to port the site, add a ton of rich metadata, and do some SEO tuning to try to improve traffic and discovery.

I set out to enhance the visual appeal of the podcast series by creating all-new episode covers/art to unify the look and feel of the series. I wanted to accentuate the woman-first energy of the series and April’s vision for helping women learn about and use these entheogens in their personal journeys. Using a combination of Adobe’s Firefly and Midjourney tools (AI image synthesis from text prompts), I iterated through several hundred prompts to generate really fun concept art.

airtable interface ui showing metadata we gathered for the project
Airtable was a fantastic tool to use as a collaborative space to gather metadata for the podcast episodes and their related art and URLs for Spotify and YouTube. I had been invited to use Airtable dozens of times and always thought “what the heck, another collaboration app”; wow, so impressed! One of my new favorite tools, and one I will be using a lot in the future to collaborate, gather info, and build project consensus with colleagues.

I primed the AI with a lot of variations to try to get more diverse and inclusive images out of the system. Midjourney was particularly problematic, giving me skinny, white, unhealthy-looking “model” women who looked like they were straight out of a fashion shoot. Getting more average/normal-looking women of different ethnicities, ages, and body types was a lot harder than it should have been! I used yellow/green for cannabis articles, purple/orange for ketamine, and pink/red/blue for shrooms. This color segmentation gave me groups of images that look great together on the landing pages.

screenshot of the High Guide podcast website showing images generated using AI for Ketamine podcast episodes
Notice the AI images for Ketamine themed podcast episodes use purple and orange themes. Adding words like “middle-aged”, “curvy” or “plump”, and lots of “in deep thought, reflecting on an important memory, processing therapeutic thoughts in a therapy session” allowed me to get images that felt on brand to The High Guide’s podcast content — women discussing and sharing their experience with Psychedelics in their own mental health and wellness journeys.
screenshot of the High Guide podcast website showing images generated using AI for Shroom podcast episodes
For the series of articles April has written on different psychedelic mushroom strains, I generated variations on an illustrative theme that suggests mushrooms. This gave the group of articles a similar look and feel that works in the gallery control that groups them together. For each individual article, the photography and more accurate depictions of each strain are important; those appear within the body text on the page.
Screen shot of adobe's Firefly AI Image generation tool
Adobe’s Firefly AI tool was a great creative playground to generate on brand images for the podcast of women in “therapeutic introspection” or “reflecting on important thoughts”.
Screen shot of Midjourney's AI Image generation tool within Discord interface
Midjourney was another tool I used to create podcast cover images for The High Guide website. Midjourney’s UI was still chatbot-based inside of Discord in January 2024 when I made the first batch of images. This is a HORRIBLE interface; I believe they are working on putting a proper UX/UI on the front end, more akin to Adobe’s already-shipping Firefly. I think Adobe is super well positioned to get this right and to be the leading tool for creatives.

I had a lot of fun porting the website from Squarespace over to an Elementor-hosted WordPress installation. We used Airtable to organize the metadata for the 80+ episodes, a WordPress plug-in called AirWPSync to sync fields from Airtable into ACF custom fields in WordPress, and Elementor’s data-binding to connect template blocks to those fields. The result is that the site is now database-rendered, and we can add podcast episodes and manage all episode pages from a single template. Yay.
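Conceptually, the sync boils down to mapping each Airtable record’s fields onto ACF field slugs. A minimal sketch of that mapping step follows; the field names are hypothetical, and in production the AirWPSync plugin handles the actual sync rather than custom code:

```python
# Sketch: map an Airtable API record onto ACF-style custom fields.
# The Airtable column names and ACF slugs below are hypothetical;
# the real sync is performed by the AirWPSync plugin. This just
# illustrates the shape of the transformation.

FIELD_MAP = {
    "Episode Title": "episode_title",
    "SEO Excerpt": "seo_excerpt",
    "Spotify URL": "spotify_url",
    "YouTube URL": "youtube_url",
    "Cover Image": "featured_image",
}

def airtable_to_acf(record: dict) -> dict:
    """Translate one Airtable record ({'id': ..., 'fields': {...}})
    into a dict of ACF field slugs ready for a WordPress post."""
    fields = record.get("fields", {})
    # Columns absent from a record are simply omitted rather than set to None.
    return {acf: fields[col] for col, acf in FIELD_MAP.items() if col in fields}

record = {"id": "rec123", "fields": {"Episode Title": "Microdosing 101",
                                     "Spotify URL": "https://example.com/ep1"}}
print(airtable_to_acf(record))
```

With the fields landed in ACF, Elementor’s dynamic tags bind each template block to a field, so one template renders every episode page.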

This is the new podcast episode template, which uses new fields for SEO-optimized titles and excerpt blocks (which I batch-generated with ChatGPT), and AI-generated featured images, which I created with Midjourney and Adobe Firefly.

I’ll now keep an eye on how Google crawls the new content to see if we can improve organic discovery to this rich content. Super enjoyed this project and excited to update this post as I have more results to share.

Categories
Projects@Work Voodle

What was Voodle?

Voodle was a tech startup that built a short-video messaging app; it launched in 2020 and shuttered in 2022. The initial idea was a mobile-first “async short video” app, a “TikTok for work”, for sales and marketing teams to talk to each other. The first versions launched in the summer of 2020 under COVID pandemic conditions, and while 10k+ users tested the app with their teams, the rise and dominance of team messaging within the Microsoft Teams, Slack, and Google Meet platforms created too much friction for any meaningful adoption of third-party apps in that era of the industry. Meaningful integration of Voodle within other apps also proved difficult, as the APIs for rich video playback were minimal or unavailable to third parties.

Voodle evolved to focus on one-to-many “me-casting” workflows, such as sales outreach, coaching groups, or other special-interest spaces for asymmetrical chat (i.e., not everyone participating makes videos; rather, most users watch the videos of a main/principal maker). Email notification workflows, analytics for views/engagement, and other more traditional sales/CRM features were added.

The last phase of exploration in summer and fall of 2022 included Web3 token-gated spaces for creators to build audiences around a mixture of NFT, video, text, graphic posts.

Here’s a quick demo from fall 2021, and some screenshots of key UX and features.

Categories
Pixvana Projects@Work

What was Pixvana?

Pixvana was a VR video tech startup (2016-2019) that built SPIN Studio, a cloud-based virtual reality video processing, streaming, and editing software suite. The company was based in Seattle, WA and had traction with large media companies that used its platform to build consumer-facing media streaming apps. As the 2015-2018 VR market cycle crashed (Microsoft and Google canceled their consumer headset plans, Meta/Oculus adoption faltered) and consumer VR failed to break through to meaningful usage, Pixvana built enterprise training tools. Ultimately the VR market proved “too early”, and development of Pixvana was shuttered in late 2019.

Pixvana SPIN Studio had comprehensive features to process raw VR video camera files and prepare them for very high-quality streaming to headsets at 8k+ resolutions. The app was capable of massive parallel rendering with cloud GPU instances, so a task that might require 10 hours to render on a single workstation-class PC could be distributed to 100+ nodes and rendered in just minutes.
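The core idea behind that speedup is splitting one long render into independent time-range shards that many nodes process at once. A minimal sketch of the shard computation follows; the shard length and job shape are illustrative assumptions, not Pixvana’s actual scheduler:

```python
# Sketch: split a long render job into time-range shards that can be
# fanned out to many cloud GPU nodes and rendered in parallel.
# The 30-second shard duration is an illustrative assumption.

def shard_job(clip_seconds: float, shard_seconds: float = 30.0) -> list[tuple[float, float]]:
    """Return (start, end) ranges covering the clip; each range becomes
    one independent render task on its own node."""
    shards = []
    start = 0.0
    while start < clip_seconds:
        end = min(start + shard_seconds, clip_seconds)
        shards.append((start, end))
        start = end
    return shards

# A 100-second clip at 30s shards yields 4 tasks; with 4 free nodes the
# wall-clock time approaches one shard's render time plus final stitching.
print(shard_job(100.0))
```

Because each shard is independent, the wall-clock time is roughly the slowest single shard plus the concatenation step, which is why 100+ nodes could collapse a 10-hour workstation render into minutes.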

Some of the core features are shown below, for posterity.

SPIN Play was the headset playback app available in the many VR app stores (Windows, Oculus, Google, iOS, etc.) that could be programmed/skinned with playlists of videos and interactive programs developed using SPIN Studio. The app could be synced over-the-wire and then run in offline mode, which allowed for very efficient management of fleets of headsets. If you had 50 headsets that you wanted to prepare for an event or trade-show, for example, you could prepare content and deploy/update on the fleet, using SPIN Studio and SPIN Play.
Pixvana SPIN Studio included both 180 degree and 360 degree camera “stitching”, wherein multiple video files from camera-rigs could be uploaded and “solved” to formats ready for streaming to VR headsets.
Parallel processing in the cloud was achieved by “sharding” jobs across multiple rendering nodes. Here dozens of clips are being rendered on hundreds of individual GPU and CPU nodes in the AWS cloud. Rendering this same set of clips on a high-end workstation would take 100x as long. This sort of “cloud-first” approach to manipulating large media files was novel for its time, and remains a yet-to-arrive technology for video processing in 2023.
Getting VR video onto headsets was a complex mess, and many startups built video players with varying approaches to “theater mode”: a way to organize, deliver, and control playback on VR headsets for controlled groups of viewers (such as for training curricula). Pixvana SPIN Studio had many features to target individual headsets with specific content and playlists, to gather analytics on how that content was viewed, and to allow a proctor/guide to set up group viewing, a requirement for enterprise applications such as training in VR.
Pixvana SPIN Studio’s most innovative and exciting features were its in-headset video editing capabilities. Tools for trimming, sequencing, and adding interactive graphics/text to VR video programs were layered on top of the cloud administration of files. Users could put on a headset, edit while in VR viewing the content at high quality, and then immediately publish/share to other headsets, since all of the data was in the cloud at all times.
Categories
Pixvana Projects@Work Voodle

What happened to Pixvana / Voodle?

In fall 2015 we made an “emerging tech” bet on VR and chose a “swing for the fences” scale of risk-reward. We believed VR would rapidly emerge as a very large-scale industry, based on anecdotal buzz and our own profound amazement at early trials of the 6-DOF systems floating about Seattle via Valve’s early-access demonstrations.

I’ve been a founder of several businesses and, by my count, have worked on ~15 v1.0 software products at both startups and large companies. Pixvana’s SPIN Studio platform far and away exceeded anything else I’ve ever been involved with in terms of system design, technical innovation, and the potential to be of large commercial consequence for decades. Alas, the work also scores as the most catastrophically irrelevant (measured by the end-user adoption we achieved) of my career.

Voodle, by comparison, was a practical, pragmatic application that required very little technical innovation or real change in users’ expectations, but it came on the scene at a time of “app saturation”, when we were met by a market with quite a bit of app-adoption friction. We executed well enough, but failed to find product-market fit.

Over the last 7 years our approach evolved and ultimately meandered as we shipped a series of interesting tools that scored as not-quite-right for customers. We started with large media companies and followed with makers; pivoted to enterprise learning orgs, then to individuals on teams, and ended up in our last efforts with “one-to-many” affinity communities. From VR, to mobile selfie video-messaging, and lately to Web3 and utility for NFTs in community.

All of us who worked on the projects are incredibly disappointed. Hard work, good execution, dogged perseverance: these are table stakes. Timing and luck are also brutally critical ingredients. We aspired to delight customers. We didn’t. I’m chagrined that we pursued such a wide set of interesting technologies in search of problems to solve—a cardinal sin.

To our shareholders and advisors: thank you for supporting me and the team with your trust, mentorship, and capital. To my colleagues: we did a lot of great work, and I know we all carry our experience together forward into the new chapters to come in our lives.

—  Forest Key, Dec 2022

The last 7 years touched the lives of the many team members who worked together. For many, Pixvana + Voodle was a first job right out of college, and for a few it was their final job before retirement. From an office in Seattle, we evolved into a remote team across 8 states, in our pajamas. We collaborated with passion, and experienced disappointments and achievements.

Categories
Voodle

My thoughts on Future of @work Team Collaboration

I spent a few hours answering some great questions for a blog post that I wanted to point to.

the title is: The Future of Communication Technology: Forest Key of Voodle On How Their Technological Innovation Will Shake How We Connect and Communicate With Each Other

Here’s the article of Forest Key thoughts on the Future of Communication Technology.

Categories
Pixvana

Me on the What Fuels You Podcast

We have been working with an awesome talent search firm called Fuel Talent, and CEO Shauna Swerland reached out to me about her podcast series What Fuels You. I have recently been listening to a ton of audiobooks on Audible, and have been getting into thematic podcasts at bedtime and on drive time… so I dove in enthusiastically and really enjoyed the chat.

Here’s my appearance, Forest Key on the podcast What Fuels You in February of 2021.

Categories
Projects@Work Voodle

On Building a Diverse Team

I just finished reading an amazing sci-fi/fantasy series, the Broken Earth trilogy by N.K. Jemisin, and it has really inspired my thinking about hiring for diverse product management roles at Voodle. I came to this lovely book series by sheer “sneaker-wave accident”; let me explain.

Sneaker-wave forced me out of my comfort zone

Over holiday break I was reading Cien Años de Soledad by Gabriel García Márquez in all of its native Spanish-language delight (what a masterpiece), and between chapters placed the book down on the beach while I stood to stretch. Out of the blue, and after more than 3 hours in that spot, a *much* larger wave came ashore and submerged all of my stuff, book included; it was a sopping mess and “not readable” until a good week of drying out.

Taking in Gabriel Garcia-Marquez’s masterwork in the original Spanish, languidly, while lounging on a beach on the Kona/Hawaii coast… minutes before a sneaker-wave appeared…

I had a few other books already on order, a few days out on Amazon’s planes/trucks (If Then by Jill Lepore, Player Piano by Kurt Vonnegut), but up next for my spouse, and at arm’s reach on the beach, was book one of the series: The Fifth Season. I hadn’t read a fantasy fiction book since The Belgariad series when I was 16! I had absolutely zero curiosity or intention to enjoy it beyond passing a few moments while I mourned my soaking-wet book-of-intent… but 1,300 pages later I have to say I really, really enjoyed it.

Diving into The Fifth Season, the first book in the Broken Earth trilogy by N.K. Jemisin. Book two, The Obelisk Gate, and book three, The Stone Sky, complete the series.

Diverse POVs = Fresh Ideas & Empathy

Just a few chapters into the series I sensed the diverse voice of a non-white-male author at the helm, which was instantly exhilarating and “new”, creating an experience completely unlike the “fantasy” tropes of so many other worlds (Tolkien, G.R.R. Martin, etc.):

  • Jemisin is an African-American woman, and nearly all of her story’s characters are either female or gender-non-conforming. This emphasis feels as dramatically natural as Tolkien’s entirely male universe, but obviously stands in ironic contrast. I found myself thinking between chapters: what might 2,000 years of Western canon literature have told via entirely female voices and characters?
  • She alternates between 3rd- and 1st-person narrative voices from multiple characters, including a 1st-person voice that speaks to the reader as “you” (I don’t think I’ve ever read anything like this before; is there precedent in literature? There must be? But it was new to me!) And without giving anything away, there are multiple folding twists throughout the series that connect and pivot and transform the reader’s understanding of the narrator. Clever shit! Very, clever.
  • The richness of The Stillness world is elaborate in its historical details, but always in vital ways that drive the story and characters; I really felt like I was living in the world she created, while 3-4 seasons’ worth of hopefully fantastic television drama was taking place. I was exhausted and relieved after powering through the series… just reading this much creative thinking was exhausting; I cannot imagine CREATING this world!

The unique narrative POVs that she brings to her writing created, for me, a more profound sense of empathy and connection to the characters. Hearing/seeing different types of characters made me feel for them and identify with them, in ways in which, as a cis-gender white male, I have never really felt connected to “lords” and “magicians” and “elves”, all of whom were created in the image and spirit of their white male authors.

Diversity in Software Product Management Teams?

…which had me thinking about my continuing understanding of just how important diversity of POV and experience is in forming high-performing teams in all walks of life, including in my industry, building software.

We started Voodle in late 2019, before the worldwide COVID pandemic accelerated what we already felt was coming: a dramatic disruption in how people work and collaborate using mobile devices and asynchronous short video. We spent 2020 organizing ourselves as a fully remote team, shipping our MVP app on web/iOS/Android, and listening to thousands of users and their early product feedback.

Whenever we open ourselves up to diverse POVs, we are better. Incorporating diverse POVs, leads us to better solutions to problems, to greater relevance with our diverse customers, and ultimately to success in all of our goal metrics.

We are searching for product experience team members that have a passion for our mission and demonstrated excellence in skill areas related to product management and user experience craft.  We are assembling our team with a mix of diverse background experiences and prior domain expertise, aiming for a wide-ranging point-of-view.  We are not looking narrowly for a specific candidate, rather, we are recruiting a TEAM.

This “matrix” is my revamped idea of the WIDE gamut of talent that might be a good fit to build out our product management team/capability at Voodle in January 2021. This “aha” moment (where I realized we needed to change the nature of our job search to get a more diverse set of candidates for the role(s)) came as a direct result of my reading N.K. Jemisin – THANK YOU, N.K.!

This led me, this Monday, to spend the day tearing apart the somewhat narrowly crafted JD we had been recruiting against for 6 months, for a proverbial “chief product officer”. I’ve been really unhappy with the lack of diversity in the candidate pool, and the search has yet to yield a hire. I think our search was too narrow. The Broken Earth trilogy directly inspired me to break apart the JD into a new “matrixed” search that more broadly seeks talent across the early-to-late-career continuum of product managers.

We are recruiting TEAM MEMBERS. If you are drawn to our mission and think you could add value as the CPO, or as a college-hire IC, or anything in between… we’d love to hear from you.

Categories
Voodle

Voodle Concepts

We are off and running adding content to explain Voodle, and our beta testing is underway. So excited.

Here’s a blog post I wrote on our website about feature development for Voodle.

Categories
Projects@Work Voodle

From VR, onwards to Voodle and voodling

In January of 2020, after nearly 4 complete years working feverishly, and with great passion and focus, on the virtual reality market opportunity that seemed so bright and shiny and attainable in 2015… my colleagues and I at Pixvana made the painful but necessary decision to shut down our product and cease all of our efforts in the XR market. It was just, not, happening. We built something great, really the best v1 product I have ever been a part of in my 30+ years building software. It was elastic, it was cloud-based, and it had incredible VR-native interfaces for building really interesting and compelling immersive content. The Quest headset is pretty darn awesome (if only that had been the v1 experience for most consumers!), and we got *great*-looking 360 and 180 stereo video working on it, over the network and offline, with a great end-to-end VR video publishing platform we called SPIN Studio. But in hindsight we were 3, or maybe 10, years ahead of any real inflection point in the VR market. After a blast of interesting products and innovation from the likes of Google, MSFT, FB, and HTC/Valve… by 2019 our industry found itself in an ever smaller and nichier market, with only FB/Oculus really pretending that there was any future anytime soon… and at their last developer event it felt like even they were just pretending. Soccer moms were featured prominently in their product advertisements while teenagers watched from the living room… really?

A fateful business trip that I took in the fall of 2019, to China and then Germany to attend the AWE event in Munich, provided both the death blow to Pixvana’s VR dreams and the inspiration to start something new from the ashes, which is what we now call Voodle. In China I had the chance to talk to some of the manufacturers of VR cameras. At the time we were still actively building support for these cameras in our Pixvana SPIN Studio cloud: we were teaching our system the nomenclature of their file-naming conventions, the warp and optic parameters to solve high-quality stitches to 8k+ resolution master files, the LUTs for camera exposure gamuts… but these camera manufacturers, one by one, either did not engage with me (when they had in past visits) or were blunt and honest with me and shared that they had ceased development and manufacturing, and in some cases had sold through their inventory. VR cameras were not going to be a thing… which, as one of my colleagues said upon hearing, “well, that’s a signal” (understatement).

Beijing in fall 2019 never looked so beautiful, as the city was decked out for the national holidays. As I took in the beautiful Beihai Park flowers, I contemplated the collapse of the VR camera manufacturer ecosystem that was a precursor to viability for the video production and publishing software pipeline Pixvana had spent four years building. Not good news for VR video.

A week later at the AWE conference in Munich, the vast majority of interesting activity at the event was not with headsets but rather with mobile phone screens. Screens at arm's length, and usually with the SELFIE camera as the primary viewing lens onto the world. As much as Snap and Facebook product managers raved that "over 1bn people are doing AR today with their selfie-cameras," I couldn't help but feel that a pivot into phone-based AR applications using Pixvana's IP and brain trust would be like signing up for another several years of equal or even greater disappointment.

A representative image from the sessions that caught my attention at AWE Munich 2019: Snap and FB/Instagram crowing about how important selfie-cameras are as ecosystems for mobile AR. The takeaway for me was not an opportunity with AR… rather, with selfie-cameras. This would directly lead to thinking about Voodle.

This one-two combination of punches to my face finally snapped me to the realization that many other entrepreneurs had come to in 2019: AR/VR might just not be relevant right now in the world, in any appreciable business-scale way. Why would Pixvana continue to spend time on the XR space? Well, after we huddled back at the office, we all agreed we couldn't.

However, what did strike me as incredibly interesting… in a way that people of my age/generation sometimes struggle to fully comprehend, is just how much people like looking at their selfie-cameras. To take pictures and video of themselves to communicate with loved ones, all day, every day. Over the course of the next several months I started to pay much more attention to how the selfie camera was being used by my family and friends, and to the ways and subjects that we communicated to each other in private networks such as WhatsApp and iMessage groups. That is what led to the kernel of an idea that grew into Voodle.

Essentially, why is it that among my family and friends, almost all of my communications are image, video, and emoji or meme based:

This is what my conversations look like on my phone (left to right): a dinner date plan with my wife, banter about puzzles with my mother, a group chat with friends about a trip to Japan, and a critical 49ers game-day discussion with my dad. Emojis, memes, photos, and videos. Very, very little text.

Yet when I'm at work, I'm 99% grounded in text. Long TL;DR text in email applications like Gmail, back-and-forth short messages with an occasional URL or small JPEG thumbnail in Slack or Teams, and sometimes shared documents in Highspot or OneDrive or Google Drive. This schism is plainly evident when looking at my phone screen: if it is text, I'm working; if it is friends and family, it is photos/videos/images. This seems wrong on many levels.

This is what my daily conversations look like at work: text, more text, more text, some documents, more text, an occasional small jpeg image/thumbnail, and more text…

So what if our work-related communications looked a lot more like how I communicate with friends and family? That's the basic kernel of the idea that led us to create Voodle.

Here are my work conversations that previously would have been lots of short text, or worse, TL;DR emails, re-imagined with lots and lots of video.

A "voodle" is a short "video doodle" that can be posted and shared among work colleagues. These may be insights into customers, competitors, operations, morale and culture… we are going to figure that out together with our beta testers. But what we can already feel from our early tests is that it is *transformational* to communicate with work colleagues in a manner more similar to the one we already use with friends and family. 2 billion consumers on their cellphones can't be wrong!

We are working on Voodle! I will write more about Voodle soon. I'm excited, as is the team, to share Voodle and voodles and voodle pools…

Categories
Pixvana Projects@Work

I had XR-vu, and I liked it!

"XR-vu", or "VR-vu"… whichever term comes into vogue in the near future, I want to go on record as saying that it happened to me a few weeks ago, and I liked it–a lot!

Yes, I'm playing on "déjà vu", that oh-so-fun feeling of experiencing something and having a sense of foreboding or otherworldly prescience, as though you've previously dreamed of the moment, or even lived the moment in a different state of consciousness. Well, play with me for a moment–take that feeling, and now imagine what it feels like when it arises because you HAVE experienced the moment before… but in Virtual Reality or another form of XR (extended reality)?

Me wearing a VR headset in the middle of the street, illustrating just how wild and crazy VR experiences can feel? Or, posing for a photo shoot we did at Pixvana so that we would have interesting pictures of people in VR headsets for blog posts like this one! Either way, a very good image that conveys my astonishment at feeling XR-vu for the first time.

That's what I experienced a few weeks ago when I visited Ollantaytambo, Peru, a lovely Andean village about two hours outside of Cusco, the former imperial capital of the Incas. I had been to the region before, about 30 years ago when I was backpacking for 18 months after college. However, I had never been to Ollantaytambo's ruins–not in person. But I did visit Ollantaytambo in Virtual Reality, in a detailed, compelling experience that was built by Microsoft as an example of how tourism and travel might be conveyed using VR. It shipped as Microsoft HoloTour, a demonstration app that launched in 2017. This technical document describes what the team did to build the HoloTour experience of Ollantaytambo–quite an interesting mix of techniques to photographically capture and convey the site.

Unfortunately I couldn't find any images to illustrate the experience in the headset–suffice to say that in HoloTour, I experienced standing in the midst of the Ollantaytambo ruins… and when I visited these same ruins in April of 2019, I had a triple-take moment that flooded my brain with *very* strong déjà-vu-like cues. Have I been here before? Why does this place seem so familiar? Did I dream it?

Here I am in Ollantaytambo’s Inca ruins, marveling at the beauty of the region, and the astonishing stone-work that pervades Inca sites.
This rock formation is what strongly triggered my sense of XR-vu, as it was prominently featured in the Microsoft Holotour visit to the same site.

No, I had never been here. But yes, I had been here in Virtual Reality! Wow. WOW. It was all the fun of déjà vu, times at least 5x… or maybe 10x. It really showed me the difference between seeing a picture or a movie, and having been immersed in, and felt, the uniquely compelling experience of *presence* that is the hallmark of XR/VR–an experience that triggers activity in the human brain that forms actual spatial *memories*, which I was then recollecting as though they were real. I don't know if this feeling would always be as strong, say, if I had felt the sensation many times before. But it was incredibly interesting, and I wanted to be first at the podium to share it. I look forward to writing about it more and discussing it with others as they have XR-vu of their own!

Anyone else experience XR-vu or VR-vu?

A wider view of the amazing, beautiful Inca site at Ollantaytambo.
Categories
Projects@Work

Masters of Visual Effects 1999 Style

Too fun.  So in 1999 some buddies and I put together a series of instructional video tapes (that we shipped out on VHS) called the Masters of Visual Effects series.  The series was originally intended to feature some true masters of visual effects, e.g., Scott Squires, John Knoll, Eric Chauvin… real veterans/gurus of the industry.

Unfortunately we only got chapters 1 and 2 produced, we ran into some production $$ overruns, and, long story short, we never got the real masters in front of the camera.  What we do have in this historical record, thanks to a remnant VHS copy that was found and digitized by my buddy Matt Silverman, is a time capsule of VFX and post-production issues from 1999, immortalized by the presenters.  I may get take-down requests from some of them, so I will leave their names out of the meta-text here, and submit it, humbly, for your viewing pleasure.

It is interesting to me now in 2016 how many of these issues from 1999 are becoming issues again in the age of VR video production.  Post-production has become relatively effortless in 2016, with basic laptops easily able to handle UHD 4k video editing and effects.  However, doing fully immersive VR content requires some of the same proxy-resolution workflows that we employed in 1999 to deal with the film-video-digital steps of that era.  Everything old is new again!

Masters of Visual Effects – 1.1 – Introduction

Masters of Visual Effects – 1.2 – Film as Digital Images

Masters of Visual Effects – 1.3 – Post Production Basics

Masters of Visual Effects – 1.4 – Pre-Viz and Editing

Masters of Visual Effects – 2.1 – Compositing Concepts Part 1

Masters of Visual Effects – 2.2 – Compositing Concepts Part 2

Masters of Visual Effects – 2.3 – Keying

Masters of Visual Effects – 2.4 – Tracking

Masters of Visual Effects – 2.5 – Paint

Masters of Visual Effects – 2.6 – Rotoscoping

Categories
Pixvana Projects@Work

Field of View Adaptive Streaming for VR

Kudos to Aaron Rhodes and Sean Safreed for the first of many Pixvana videos that outline some of the unique challenges, and solutions, to making great stories and experiences using video in Virtual Reality.  This video tackles the unique challenges of working with *really* big video files on relatively underpowered devices and networks.  This general approach is something that we think of as "field of view adaptive streaming": unlike traditional adaptive streaming, where multiple files are used on the server/CDN to make sure that at any given time a good video stream is available to the client device, in VR we have to tackle the additional complexity of *where* a viewer is looking within that video.  The notion of using "viewports" to break up the stream/video into many smaller videos, each highly optimized for a given FOV, is something we are firing away on at the office these days.
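To make the idea a bit more concrete, here's a rough sketch of what the client-side viewport selection could look like. The manifest layout, stream names, and hysteresis threshold are purely illustrative assumptions for this post, not our actual implementation:

```python
# Hypothetical viewport manifest: each stream is encoded at full quality
# inside a field of view centered at the given yaw angle (degrees).
VIEWPORTS = [
    {"center_yaw": 0,   "url": "video_yaw000.mp4"},
    {"center_yaw": 90,  "url": "video_yaw090.mp4"},
    {"center_yaw": 180, "url": "video_yaw180.mp4"},
    {"center_yaw": 270, "url": "video_yaw270.mp4"},
]

def angular_distance(a: float, b: float) -> float:
    """Smallest absolute difference between two yaw angles, in degrees."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

def select_viewport(head_yaw: float, current=None, hysteresis: float = 10.0):
    """Pick the viewport stream whose high-quality region best covers the
    viewer's gaze; hysteresis avoids thrashing between adjacent streams."""
    best = min(VIEWPORTS, key=lambda v: angular_distance(head_yaw, v["center_yaw"]))
    if current is not None:
        # Stay on the current stream unless the best one is meaningfully closer.
        if angular_distance(head_yaw, current["center_yaw"]) <= \
           angular_distance(head_yaw, best["center_yaw"]) + hysteresis:
            return current
    return best

stream = select_viewport(95.0)  # viewer looking roughly 90 degrees to the right
```

In a real player this selection would run every frame against head-tracking data, with the hysteresis tuned against segment-switch latency; the sketch just shows the core "which viewport covers the gaze" decision.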

So, should we call this FOVAS for short, for Field of View Adaptive Streaming?  It is kind of a weird acronym, but it makes a lot of sense… I'm using the term regularly; maybe it will stick!

Here’s the video:

Categories
Pixvana

Adaptive 360 VR Video Streaming

We're having a lot of fun at Pixvana working on various VR storytelling technologies–what we have termed "XR Storytelling," as we are thinking broadly about not just AR and VR but xR in general, such as virtual reality caves and other yet-to-be-conceived immersive platforms that will require similar tools and platforms.  One of the key challenges we are working on is how to deliver absolutely gorgeous, high-quality adaptive-streaming 360 VR video.

Last week we combined our love for food with our love for VR, and shot a rough blocking short film that we intend to turn into a higher quality production in a few more weeks, when we can bring a higher quality camera rig into the mix.  Aaron blocked out the shots while the team at Manolin, the f-ing awesome restaurant next to our office, was prepping for the day.  Here is the rough cut:

Then we threw it into our elastic cloud compute system on AWS and produced several variations as a series of "viewports," which, when viewed on a VR headset like the HTC Vive (the best on the market so far), produce some pretty darn immersive/awesome video at a comfortable streaming bandwidth that can be delivered on demand to both desktop and mobile VR rigs.  Here's a preview of what the cumulative rendered "viewports" look like in one configuration of the settings (we are working on dozens of variations using this technique, so we can optimize the quality:bandwidth bar on a per-video basis):

Looking forward to sharing more of what we are up to with the public in the near future–for now, if you are a Seattle friend, stop by for a demo and a delicious dinner at Manolin Restaurant!

Categories
Pixvana Projects@Work

Clear Example of VR Video Assembly

Here are some really clear images and videos that illustrate a VR video assembly process using a six-camera GoPro rig.  This isn't meant as a comprehensive how-to; rather, it is a visual-only guide that I will be using in presentations to walk folks through the process.

Step 1 – The Rig (6 GoPro cameras)


Step 2 – Shoot


Step 3 – Raw Footage

Step 4 – Exploded View

Step 5 – Equirectangular Stitch (rough)

Step 6 – Spherical Playback
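For the curious, the heart of the equirectangular stitch in Step 5 is just spherical coordinates flattened onto a rectangle: every direction the cameras see maps to one pixel of the master frame. A rough sketch of that mapping (the axis convention and 4096x2048 resolution are illustrative, not tied to any particular stitching tool):

```python
import math

def direction_to_equirect(x, y, z, width, height):
    """Map a 3D view direction (y up, -z forward) to pixel coordinates
    in an equirectangular image of the given size."""
    r = math.sqrt(x * x + y * y + z * z)
    yaw = math.atan2(x, -z)                    # longitude in [-pi, pi]
    pitch = math.asin(y / r)                   # latitude in [-pi/2, pi/2]
    u = (yaw / (2 * math.pi) + 0.5) * width    # 0 at the left edge of the frame
    v = (0.5 - pitch / math.pi) * height       # 0 at the top (straight up)
    return u, v
```

A stitcher runs this in reverse for every output pixel–equirectangular coordinate back to a ray, then into whichever camera's lens model covers that ray–which is where the per-camera warp and blending work actually happens.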

Categories
Pixvana

Why VR Video Will Be BIG

A lot of my friends have asked me why I've plunged into starting a new company, and why/how I chose building a VR video platform specifically as an area for software innovation.  I think I can succinctly summarize it as: VR video is *magical*, and things that are truly *magic* are f8cking cool and rarer than unicorns.  I see a unique confluence in time of me, my skills, my passions, and a market need and opportunity.  It's only been about 90 days since I put on my first vintage-2015 VR headset (like many, I had tried the 1990s-era stuff, which just made me vomit), and my co-founders and I gave birth to our VR video startup, Pixvana, this week.

Here’s why:

When I put on an HTC Vive headset for the first time and experienced the demos Valve had been showing in the summer of 2015, I experienced a profound, complete, pervasive feeling of what I knew immediately to be what the VR industry calls "presence".  The sensation was right there with other must-try-in-a-lifetime, hard-to-describe-to-someone-who-hasn't-done-it-yet experiences: falling in love, skydiving, scuba, sex, certain recreational mind-expanding drugs, finishing a marathon, watching my wife give birth to our boys…  Specifically, for me, I experienced a sense of out-of-body time and space travel: time stopped functioning on the normal scale of my daily routines, my body perception was replaced with something "virtual" that was not quite real but not quite fake either, and I was taken to far-away imagined worlds–underwater, into robot labs, onto toy tables, and to several other places that, while not photo-real in their rendering, felt and behaved in ways that were significantly real enough that it WAS REAL.

wevr theBlu.jpg
WEVR's theBlu, often the first moment of real "presence" experienced by those who have tried the HTC Vive in 2015–it was for me!

When I took the goggles off after that first experience, it took me a good 3-5 minutes to "come back"–just like landing in Europe after a long flight and sensing the Parisian airport as different from my home-city departure equivalent, coming back from the virtual world took me a moment of reflection and introspection to balance the "wait a minute, where am I now?"  It made me think of existentialism and some of my favorite Jorge Luis Borges short stories–my mind immediately considered, "wait, am I still in VR, just perceiving another layer of possible reality, waiting to take off another set of goggles within goggles?"  This wasn't a scary thought or a psychotic split; rather, it was a marvel at the illusion that I had just witnessed, like a great card trick from a magician–only it was my own mind that had played the trick on me…

Lu with VR Headset-1000 small
The smile on my friend Lu’s face perfectly captures her “aha moment” of first-time-presence.  I’ve seen dozens of friends light up this way during their first time VR trials.

In addition to the SteamVR experience (the HTC Vive is just one hardware implementation; what I was really marveling at was Valve's SteamVR vision and software, not the hardware form factor), in the last few months I've tried most of the other mainstream, 2016-expected-delivery VR experiences: Oculus Rift, Samsung Gear, PlayStation VR, and a variety of configurations of Google Cardboard and various phones.  In terms of delivering "presence", without a doubt the Vive is on a completely different level–I'd rate it a 10 on a scale of 1-10, the DK2 Rift and Sony VR a 5, Samsung Gear a 3, and Google Cardboard a -5 (I'll write more in detail about Cardboard in the future–suffice to say it is antithetical to creating any sense of presence, and it does VR an injustice to have so many of them floating around out there, suggesting to all the unknowing consumers who have tried one that an inferior experience is what's coming in VR).  But these distinctions between hardware systems this early in the market are really inconsequential.  I believe that, just as with mobile devices or PCs, within 5 years the hardware will become pretty uniform and indistinct (is there really any difference at all between an iPhone 6 and a Samsung Galaxy 6?), and the real business and consumer differentiation will be in the software ecosystems–the app stores and developer communities that will rise–as well as in the software applications that will be fantastic and will run cross-platform on all of these devices.

Andrew Little 1
Andrew in disbelief, watching a VR video that made him forget he was sitting in my living room.

So for that reason, I'm much more interested in the content and software enablement systems that need to be built to enable creators to build cool shit that will be compelling and magical for consumers.  The more magic experienced, the more VR consumption and headsets sold, and a virtuous business cycle emerges: new content, demand for that content, more content creators, repeat…

It is clear to me that there are two canonical types of content for these devices: 3D CGI environments, and video/still-image photography-based content.  3D CGI material is very attractive and inherently magical, as it can fully render images that track the user's head movement side to side, and even at "full room scale" if she walks around and freely explores the environment.  A pretty mediocre piece of 3D CGI VR content on the Vive is pretty darn amazing.  A great piece of CGI VR is astoundingly cool (e.g., WEVR's theBlu experience).

verse u2
Chris Milk’s U2 VR Video is a glimpse of VR video specific semantics that are just now being worked out–both creatively and from a technology perspective.

On the other hand, even a really great VR video can be pretty darn "meh" on any of the VR headsets, and pretty darn awful and nausea-producing on a bad VR headset ('wassup, Google Cardboard!).  But it won't be that way for long–this is more a reflection of the nascent state of VR video than of a fundamental problem with the medium.  VR video content, and the technology to shoot, prepare, fluff, and deliver VR video for playback, will follow a rapid improvement cycle just like other new film mediums have enjoyed.  Consider:

In the late 1890s, when motion pictures were being introduced, Vaudeville was the mainstream performance art form and most early cinema consisted of "filmed vaudeville".  Within 20 years, unique storytelling technology and production and editing techniques were introduced with films such as The Great Train Robbery, and various intercutting techniques between very different camera compositions (wide shots, close-ups, tracking shots, etc.) started to tell stories in ways that bore no resemblance at all to vaudeville's tropes.  This vaudeville-to-cinema transition was a ~1900-1950 phenomenon, which included the addition of audio in the 20s, color in the 40s, and large-format wide-aspect-ratio spectaculars like VistaVision and Cinerama in the 1950s.

great train robbery
The 1903 film The Great Train Robbery used a myriad of new techniques in composition and editing, which must have been initially disorienting in their novelty and their break from the more traditional vaudeville "sitting in an audience" perspective that viewers would have been accustomed to.

Television came next and introduced live broadcasting and recorded programs, which were stored on tape, first professionally and later for consumer distribution on VHS/Beta.  Editing was done as "tape-to-tape" transfer–cumbersome, time consuming, and actually slower than just cutting film pieces together on a Moviola.

Thankfully I came into the film industry just as digital filmmaking tools were obsoleting devices like this.  I'm sure it was just a joy to handle all that film by hand and make splices with razor blades and cuts with glue and tape… NOT!

In the 1990s, when I worked at Industrial Light and Magic, the first digital effects and digital post-production projects were just being introduced.  When Jurassic Park was made in 1993 there were fewer than 30 digital effects shots with CGI creatures, but 5 years later there were films being made with 1000s of shots, and some that were color graded digitally and thus 100% processed through computers.  In that same timeframe, non-linear editing tools like the Avid made it so much quicker and more time efficient to edit that editors started to cut films in a whole new style that was much more rapid and varied–it is incredible to watch a sampling of films from the 1985-92 period and compare them to those from 1996-2000.  My teenage sons see the earlier films as I might see a 1922 film, pre-sound/color.  The analog-to-digital cinema production transition was perhaps a 1990-2009 phenomenon that started and ended with James Cameron films (The Abyss was the start, and Avatar the culmination, in its perfection of blending digital and analog content seamlessly).

Web video infrastructure enjoyed rapid innovation and disruption, from crappy low-resolution thumbnails in 2000, to pretty darn awesome 4k with robust streaming by 2010.

In the 2000s the web was the big disruptor, and technologies like QuickTime, Flash, Silverlight, Windows Media, and the enabling web infrastructure have pushed televisions, which were once broadcast reception devices, into on-demand streaming playback screens for web content and DVR playback.  My household is now dominated by YouTube (which consumes my teenagers' free time at all hours of the day on their phones) and Netflix and HBO GO (which dominate my wife's and my evenings).  Early web video was mostly inconceivably small and crappy looking, but by 2010 it was of the highest quality and matched master recordings in resolution and fidelity.

friends with VR Goggles On
I’ve given VR Video demos to ~70 folks so far; it has been fascinating to see and hear people’s reactions.

Which brings me to VR Video.  It is clear to me that VR Video will disrupt other forms of video consumption and viewing in a similar manner, and following the trend of other media tech adoption, will do so in a much shorter time frame.  There is so much to do, so much to build, so many creative problems to solve.  I’ll write more about that soon–but for my friends that have asked, now you know the context for my excitement about VR Video.

Forest Pixvana 2015.jpg
Forest Key with a "VR video is going to be frickin' awesome" grin, sitting on the steps of Pixvana's new office in the Fremont neighborhood of Seattle.
Categories
buuteeq

Trotamundo wins Perk of the Year 2014

I’m incredibly proud and happy to have won the Geekwire Awards Perk of the Year 2014 for buuteeq’s employee travel stipend program, “Trotamundo”.  I started buuteeq because of my deep passion for travel and seeing the world.  We created the Trotamundo program because we wanted our company culture to embrace and amplify the experience of travel.  Travel exposes us to diversity of human experience, inspires us, and ultimately transforms our world view in a way that also makes our company stronger and more nimble in our quest to revolutionize the hotel industry.

Here's the award ceremony on YouTube:

And some photos:
