Last week, San Francisco-based Audioshake raised $2M in funding, which is not unusual, yet at the top of the band for a baby startup. I see audio and I am immediately interested, both because of this and because I am curious: is there any new technology to make audio even more monetizable? Audioshake developed an algorithm that breaks a song down into its component parts: vocals🎤, drums🥁, guitar🎸, and other bits; it also helps creators recombine these. Their promise for artists: they will make more money out of their songs if they can offer each song as a whole + as its individual components (where these sound 🎶good, I guess).
💡So now I am thinking how cool this would be for textbooks and online courses; an easier and faster way to break down and remix the content in new ways.
Well, look no further than:
🧐StudySmarter: students can use this app for free to convert text into interactive content, i.e. flashcards and notes that are then organized into a study plan to help with retention and recall. It raised $16M this year.
📝Scribe: lecturers could have this record their movements and clicks on their computers and automatically convert them into step-by-step guides and instructions for their students; mega-useful for any classes that make intensive use of software, coding or data analysis. Scribe raised $30M. This is mindblowing, I can see all computational and quant social scientists using it. Imagine how easy it will be to update lab materials; oh, and how about library guides for using all the different databases? 🤯🤯🤯
What else is going on?
🧠CourseTech for corporates is raising enormous amounts of money. Paris-based 360Learning got $200M to develop business executives through collaborative learning; they also have a feature that enables businesses to create and adapt course content to any new situation, new products, new case studies, you name it! SoftBank is among the backers.
😎Praxis Labs raised just over $15M for their immersive learning experiences focusing on inclusion in the workplace. Embodiment is definitely the way to go here; I wrote about the extensive research that already shows its benefits. Insideout Reality has a similar offering.
💣NVIDIA and Microsoft built a competitor to GPT-3 called the Megatron-Turing Natural Language Generation model, with 530 billion parameters. GPT-3 has 175 billion. So Microsoft has their hands in both of the largest language models. And nobody’s talking about the environmental damage this is causing. At some point I heard estimates that training an NLP algorithm with 300 million parameters would need the same amount of electricity used to power a US household for 50 days. That is why there was a trend to reduce, not increase, the number of parameters.
🤑Harvard Management Company made over 30% on its endowment last year, increasing its value to $53 billion, mostly because of the amounts it invests in public and private equity. Harvard has the largest endowment among universities in the US; and if it were a hedge fund, it would be the 7th largest in the world. Other schools had similarly soaring returns.
See you next week! 👩🏻💻