It's time to create a promo video for your company. You’ve shot the footage, it’s been edited together, and now all that’s missing is the music. You know what you’re after: something that sounds a bit like Bonobo, with a Daft Punk twist. The problem is, you don’t want to pay the extortionate royalties for either artist, and there’s only so much scouring the internet you can do for free tracks before you realise that they’re all pretty poor quality and don't fit your video.
Jukedeck, a London-based startup, is working to overcome this problem and has built a machine-learning-driven product that can compose original music, giving companies (and individuals) personalised music that’s ‘dynamically shaped to their needs’.
The team comprises composers, producers, engineers, academics and machine learning experts, all with a passion for music and technology. They saw a problem, and they’ve built a platform to overcome it.
Ed Newton-Rex, founder and CEO of Jukedeck, spent his childhood studying, performing and composing music. We were especially interested to hear how Ed made the transition from performance into AI. He told us that when visiting his girlfriend at Harvard after university, he went along to one of her lectures on computer science and came away from it thinking that coding ‘might not be so insurmountable after all’, and decided ‘to learn to code and start trying to build an AI composition system’. This interest came from the question of what ‘good’ music really is, and the related question of why computers hadn’t previously been able to compose good music.
Ed will be speaking at the Deep Learning Summit in London on September 21 & 22. Early Bird passes are available until July 28th. Register now to guarantee your place.
Jukedeck has been named one of WIRED’s Hottest European Startups and has won a number of startup competitions, including the Startup Battlefield at TechCrunch Disrupt and LeWeb in Paris, as well as a Cannes Innovation Lion. Keen to hear more about this success, we put some questions to Ed ahead of his presentation in London.
We're working on artificially intelligent music composition: we're building an AI composer that uses neural networks to compose original music. We’re a team of 20 people working on this out of London (and we’re all musicians). We’re focused on improving AI’s ability to understand music at the composition level, training it to be able to create novel chord sequences and melodies.
To date, we’ve used this to give video creators the ability to create unique, royalty-free soundtracks for videos. Now, though, we’re working on giving lots more people access to our technology, so that it can be used across a range of verticals.
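To give a flavour of what ‘learning chord sequences from examples’ can mean at its simplest, here is a toy sketch: a first-order Markov chain over chord symbols, trained on a few hand-written progressions. This is purely illustrative and is not Jukedeck's actual system, which uses neural networks; the corpus and chord names below are invented for the example.

```python
import random

# A tiny invented corpus of chord progressions (illustrative only).
CORPUS = [
    ["C", "Am", "F", "G", "C"],
    ["C", "F", "G", "C"],
    ["Am", "F", "C", "G", "Am"],
]

def train(progressions):
    """Count which chords follow which, building a transition table."""
    table = {}
    for prog in progressions:
        for a, b in zip(prog, prog[1:]):
            table.setdefault(a, []).append(b)
    return table

def compose(table, start, length, seed=None):
    """Random-walk the transition table to emit a new progression."""
    rng = random.Random(seed)
    chords = [start]
    for _ in range(length - 1):
        options = table.get(chords[-1])
        if not options:
            break  # dead end: no observed continuation
        chords.append(rng.choice(options))
    return chords

table = train(CORPUS)
print(compose(table, "C", 8, seed=42))
```

A real generative model replaces the counting step with a trained neural network, which can capture much longer-range structure than a one-step transition table; the overall loop of ‘learn from examples, then sample something new’ is the same.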
We’re really doing this for two reasons: to democratise and personalise music creation. We think AI can democratise music creation by letting large numbers of non-musicians start creating music, without years’ worth of musical experience behind them; and we think it can personalise music creation in that, once AI understands how to compose music, it can compose music on the fly, specifically for the situation you’re in.
Absolutely - AI is already affecting the music industry in big ways. It’s being used to amazing effect, for example, by Spotify and others in music recommendation: streaming services are now remarkably good at surfacing music you haven’t heard but will probably like.
I think the biggest change of the next few years, though, will be the effect of AI on music creation: not just in composition, but in production, performance and even inspiration. AI will learn to perform many music production tasks that have traditionally required human expertise, and this will enable lots more musicians to have their music professionally produced. AI is already starting to learn how to perform music expressively, and this trend will continue. And AI will be used by composers as a source of inspiration, giving them musical ideas they can then build on. In short, AI will become part of the creative process, a tool that musicians can collaborate with in order to help them make their own music.
Ed wrote more about this for Tech City News, and you can check it out here.
There are so many that it’s hard to figure out what the main challenges are! However, there are certainly things that are difficult. In particular, a big limitation on AI composition at the moment is the fact that AI has only a fraction of the training data that humans have access to: AI is trained solely on musical training data, whereas, when human composers compose, they draw on a wealth of other experiences, such as relationships, emotions and memories. All of this means that, for now, AI isn’t composing music that matches up emotionally with what humans are capable of creating.
Keen to hear more? Alongside Jukedeck we will hear from the likes of DeepMind, Facebook, Deep Instinct, alpha-i, OpenAI and many more.
Last chance to register for Early Bird passes for the Deep Learning Summit London this September. Offer ends July 28th.
22 November 2017, London
Leading minds in healthcare and machine intelligence will come together for an evening of networking and keynote presentations around tools & techniques set to revolutionise healthcare applications, medicine & diagnostics. Join us for a three-course meal to support and showcase women in Healthcare and Machine Intelligence.
23 January 2018, San Francisco
Leading minds in machine intelligence will come together for an evening of networking and keynote presentations. Join us for a three-course meal to support women in AI and Machine Intelligence.
25 January 2018, San Francisco
The next generation in predictive intelligence: anticipating user & business needs to alert users & advise on logical next steps, increasing efficiency. The summit will showcase the opportunities of advancing trends in AI Assistants & their impact on business & society. What impact will predictive intelligence have on business efficiency & personal organization?