AI-powered screens detect mask wearing at venue

A North Carolina stadium is using artificial-intelligence (AI) technology to monitor for Covid-compliant public behaviour, such as social distancing and the wearing of face coverings, among fans arriving at the venue.

The 50,500-capacity Kenan Memorial Stadium in Chapel Hill, which is primarily used for American football, has installed ‘Health Greeter Kiosks’ to encourage anyone passing to wear masks and practise social distancing. The AI – specifically machine learning and computer vision – uses real-time data from a depth-sensing camera to detect if someone is wearing a mask and whether there is proper spacing between individuals. As people walk by the screens, a large display alerts them to either correct or continue their behaviour.
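Lenovo has not published the kiosks’ code, but the distancing half of the check can be sketched in a few lines: given 3D positions estimated from a depth camera, flag any pair of people standing closer than a threshold. The function and threshold below are illustrative, not the Reese Innovation Lab’s implementation.

```python
from itertools import combinations
import math

def spacing_violations(positions, min_distance_m=2.0):
    """Return index pairs of people standing closer than min_distance_m.

    positions: list of (x, y, z) coordinates in metres, as might be
    estimated from a depth-sensing camera's point cloud.
    """
    violations = []
    for (i, a), (j, b) in combinations(enumerate(positions), 2):
        if math.dist(a, b) < min_distance_m:
            violations.append((i, j))
    return violations

# Three people: the first two are 1.2 m apart, the third is well clear.
people = [(0.0, 0.0, 3.0), (1.2, 0.0, 3.0), (5.0, 0.0, 3.0)]
print(spacing_violations(people))  # [(0, 1)]
```

The mask-detection half would sit in front of this, with a trained computer-vision model classifying each detected face before the same alert logic runs.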

The technology was developed by the University of North Carolina’s Reese Innovation Lab, with support from Lenovo North America, and first deployed for an American football match (University of North Carolina vs Virginia Tech) on 10 October. The kiosks, which were placed at locations such as entrances, bag-check queues and ticket offices, “worked as intended, tracking and encouraging safe behaviour”, according to Lenovo.

“These kiosks will help us better understand human behaviour and encourage safe behaviour”

“We needed real innovation to meet this unprecedented challenge, and pushing the limits of technology is at the core of our lab’s mission,” says Steven King, chief innovation officer of Reese Innovation Lab. “Engineering a technological response to Covid-19 and event-attendance restarting is a real and rewarding challenge, [and] I’m grateful for the support of UNC-Chapel Hill leadership, our exceptional and inventive students and Lenovo.”

The kiosks, which use fully anonymised data, with no images saved or transmitted, may help shape safety protocol and provide insight on how crowds behave during the coronavirus pandemic, adds King.

“We see this as the starting point of wider deployment, with opportunities to refine and customise the technology,” he explains. “From campus hallways to outdoor events, these kiosks will help us better understand human behaviour and encourage safe behaviour, and I’m excited to see how we evolve and adapt this AI-powered solution.”


This article forms part of IQ’s Covid-19 resource centre – a knowledge hub of essential guidance and regularly updated resources for uncertain times.

Get more stories like this in your inbox by signing up for IQ Index, IQ’s free email digest of essential live music industry news.

2020 and Beyond: How ticketing will revolutionise the entertainment experience

You are looking to buy a ticket to an interesting event for the upcoming weekend. Instead of opening your browser, you ask Siri or Alexa, “What’s happening this weekend in town? What are my friends and family doing?”

Within milliseconds, your AI assistant searches the internet for the events that seem most appealing to your interests and that appear in your family and friends’ social media feeds. Your AI assistant responds, asking you follow-up questions about your desired experience and budget.

Once you have found the perfect event, you give your AI assistant the go-ahead to buy the tickets. Almost immediately, your tickets are purchased, verified and readily available in your mobile wallet. This transaction was likely processed through a mobile payments solution and automatically added to your calendar. Your AI assistant asks if you would like to invite friends, because if they also attend the event, the brand offers you an incentive.

The day of the event is here. When you get within a geofenced area of the event location, you receive a notification asking if you would like an augmented-reality tour guide to lead you to your entry gate and seats. As you approach the entrance, your face is scanned to verify your identity and your radio-frequency identification (RFID) or mobile phone ticket is checked in at a near-frictionless entry point.

A ticket is not just a piece of paper, but the direct connection between a person and an experience

Once you enter, your phone becomes a second-screen experience, providing your choice of merchandise, food ordering, artist or athlete information, game statistics and live betting experiences. When you arrive at your seat, your food and drink order is waiting for you and you settle in for a great time.

This glimpse into the near future is closer than it might seem. All of the referenced technology already exists. The next step is bridging the gap between the experience, the technology and human behaviour.

A ticket is not just a piece of paper, but the direct connection between a person and an experience. It is also the core mechanism for how organisations will gather data to better engage with you and provide offers you will find interesting.

The smartest organisations invest not only in technology, but also commit to securing the treasure trove of data on their users. Piecing these together will be the key to continually providing users with great experiences in a world of increasing entertainment options.


Mark Miller is the co-founder and chief executive of TicketSocket, a white-label ticketing and registration service for venues and events.

Yamaha unveils first piano AI system

Yamaha Corporation has released footage of the world’s first artificial intelligence (AI) piano system, in the company’s latest foray into the world of live music AI.

The piano system, which made its debut at the Ars Electronica festival in Linz, Austria, is capable of playing any piece of music in the style of the late pianist Glenn Gould. Music hologram production company Eyellusion has also expressed interest in bringing Gould back to life in the form of a hologram tour.

At the festival, the system performed solo and a duet with pianist Francesco Tristano, accompanied by a trio of Bruckner Orchestra Linz members.

The system consists of a player piano and the AI software, which applies deep-learning technology to play any piece in Gould’s style with the aid of sheet music data.

It also includes Yamaha’s original AI Music Ensemble technology, enabling the system to analyse the performances of human pianists and play alongside them.

“To bring artificial intelligence into connection with music should be the beginning of a discussion that searches to expand and improve our virtuoso actions”

“To bring artificial intelligence into connection with music should not end in a competition, but should be the beginning of a discussion that searches to improve us and to expand and improve our virtuoso actions,” comments Martin Honzik, senior director of Ars Electronica Festival, Prix and Exhibitions divisions.

Brian M Levine, executive director of the Glenn Gould Foundation, recommends the project be “taken into the music mainstream” due to the “keen interest”, “great deal of attention” and “spirited debate” it will generate.

The AI piano concert marks Yamaha’s latest foray into live music AI, following the reproduction of the voice of Japanese singer Hibari Misora through its Vocaloid:AI singing synthesis technology.

According to Koichi Morita, senior general manager of Yamaha’s research and development division, the aim of such AI projects is to expand “the boundaries of musical creativity”.

“By sharing some of our ongoing results with music enthusiasts at Ars Electronica,” says Morita, “I feel we have taken another step toward realising these new possibilities.”



American Pi: AI reimagines Don McLean for Pi Day

To celebrate Pi Day, Amadeus Code, an artificial intelligence-powered songwriting assistant, has composed a new song, ‘We Started Singing’, inspired by Don McLean’s 1972 classic, ‘American Pie’.

The song – written entirely by software – is one of 99,750^1,619,558 songs theoretically capable of being composed by Amadeus Code, according to the company, whose AI platform draws inspiration from “centuries of music” to provide songwriters with melodic ideas (but not lyrics).
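To get a sense of the scale of that claimed figure, the number of decimal digits in 99,750^1,619,558 can be computed from logarithms without ever evaluating the power itself – a quick back-of-the-envelope check, not anything from Amadeus Code:

```python
import math

# Digit count of n = 99,750 ** 1,619,558 via floor(log10(n)) + 1,
# avoiding the astronomically large power itself.
digits = math.floor(1_619_558 * math.log10(99_750)) + 1
print(f"{digits:,} decimal digits")  # roughly eight million digits
```

For comparison, writing the number out at 3,000 digits per page would fill a library of thousands of books.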

Pi Day is an annual celebration of the mathematical constant pi (π), approximately equal to 3.14159. It is celebrated on 14 March (3/14, in the American month/day date format).

To create ‘We Started Singing’, Amadeus Code adjusted the beats per minute to 136 (‘American Pie’ is 140 BPM) “to accommodate a half tempo of 68 in sections of the new song”.
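The half-tempo relationship quoted above is simple arithmetic: halving a tempo doubles the duration of each beat. A quick sketch:

```python
def beat_seconds(bpm):
    """Duration of one beat, in seconds, at the given tempo."""
    return 60.0 / bpm

full_tempo = 136               # the new song's main tempo
half_tempo = full_tempo // 2   # 68 BPM in the half-time sections

print(half_tempo)                          # 68
print(round(beat_seconds(full_tempo), 3))  # 0.441
print(round(beat_seconds(half_tempo), 3))  # 0.882
```

Choosing 136 rather than keeping 140 gives a whole-number half tempo, which keeps the two sections cleanly locked together.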

“Amadeus Code is currently on a path to creating the richest knowledge base in history”

Note length, another of the variables, was set longer than the default value to produce a less busy melody, while the backing vocals (also computer generated) were created by copying a separate melody generated by the app.

“Like how π is an infinitely expanding number,” the company says, “in terms of creating melodies, Amadeus Code is currently on a path to creating the richest knowledge base in history.”

Listen to ‘We Started Singing’ above.

A separate AI, DataRobot, correctly predicted Childish Gambino’s ‘This is America’ as winner of song of the year at the recent Grammy Awards.



AI correctly predicts Grammys 2019 song of the year

American data scientists correctly predicted Childish Gambino’s song of the year win at the 61st Grammy Awards, held at Staples Center in Los Angeles last night (10 February).

Using its machine learning platform, Boston, Massachusetts-based DataRobot analysed all Grammy song of the year winners since 1959, identifying common traits – including the genre of the song, amount of profanity, general sentiment, total word count and various audio features derived from Spotify, such as tempo, time signature, key and duration – to determine this year’s most likely victor.
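DataRobot has not published the winning model itself, but the general shape of the approach – encode each song as a vector of traits, then score it with a fitted model – can be illustrated with a toy linear scorer. The feature names, values and weights below are invented for illustration, not DataRobot’s:

```python
# A toy stand-in for the kind of model described above: each song is a
# feature vector, and a fitted linear model scores how "winner-like" it is.
songs = {
    "Song A": {"tempo": 120, "duration_s": 241, "profanity": 3, "word_count": 420},
    "Song B": {"tempo": 96,  "duration_s": 215, "profanity": 0, "word_count": 260},
}

# Hypothetical weights, standing in for coefficients a real model would learn.
weights = {"tempo": 0.01, "duration_s": 0.002, "profanity": -0.3, "word_count": 0.001}

def score(features):
    return sum(weights[name] * value for name, value in features.items())

ranked = sorted(songs, key=lambda name: score(songs[name]), reverse=True)
print(ranked[0])  # → Song B
```

A real pipeline would fit those weights (or a non-linear model) against six decades of past winners rather than hand-picking them, which is exactly the search over 140 candidate models described below.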

After six minutes, during which the artificial intelligence (AI) generated 140 data models, DataRobot’s Taylor Larkin identified the best-performing model, which “performed about 44% better than randomly guessing during my testing period [from 2012–2018],” he explains.

“Machine learning … can have applications well beyond the traditional ones we are used to seeing in fields such as banking or insurance”

This model correctly predicted Gambino’s ‘This is America’ as most likely song of the year candidate, with Lady Gaga and Bradley Cooper’s ‘Shallow’, from A Star is Born, as a close runner-up (screenshot courtesy of DataRobot):

[Screenshot: DataRobot’s Grammy 2019 predictions]

“With this experiment, we’re demonstrating that machine learning can not only be fun but can also have applications well beyond the traditional ones we are used to seeing in fields such as banking or insurance,” explains Larkin.

“The music industry could tap into its potential, studying what makes a song successful and understanding why people listen to the songs that they do. With the volume of great music being produced, having quick insights into song popularity could be another tool to help musicians and music producers to refine their expertise.”

Kacey Musgraves won the Grammy for album of the year, for Golden Hour, with Dua Lipa taking home the prize for best new artist. Gaga and Cooper’s ‘Shallow’, meanwhile, won best pop duo/group performance.

See the full list of winners at the Recording Academy website.



AI creates “digital twins” for entertainment industry

Oben, a company specialising in personal artificial intelligence (PAI) technology, has created the first-ever AI entertainment hosts, who presented Chinese New Year programming together with their human counterparts.

On 28 January, the well-known television hosts Beining Sa, Xun Zhu, Bo Gao and Yang Long hosted China Central Television’s (CCTV) Network Spring Festival Gala alongside their “digital twins”, courtesy of Oben’s PAI technology.

An accompanying WeChat mini-app allowed viewers to use any of the four PAI hosts to send personalised new year’s greetings to friends and family. The celebrity PAIs delivered video messages to recipients, much in the way that human celebrities record personalised voicemails or Instagram videos for fans.

“The ‘digital twins’ facilitate new ways to engage viewers and fans in more personalised and unique experiences”

The PAIs created by Oben can look and sound like anyone in the world, constituting believable digital replicas of famous human figures. Using AI, the avatars can be taught to sing in another’s voice, perform specific dances and interact with fans through mobile devices.

The “digital twins” facilitate new ways to engage viewers and fans in more personalised and unique experiences. The technology has proved popular in the entertainment industry and Oben has worked on several celebrity partnerships.

The company is expanding into the music industry too. Oben recently released a human/PAI duet music video with popular Chinese female idol group SNH48. The “digital twins” join their human counterparts in the video to sing, dance and interact with the band.



Snap debuts AI-powered Crowd Surf at Outside Lands

Snapchat developer Snap Inc. used last weekend’s Outside Lands festival in Golden Gate Park, San Francisco, as the debut for a new feature for the app: Crowd Surf, which stitches together audience ‘snaps’ to create a multi-angle video account of a concert or live event.

Snap deployed Crowd Surf during Lorde’s performance on Sunday 13 August, using artificial intelligence to synchronise the audio from multiple fans filming the New Zealand singer and create an interactive Snapchat ‘story’ in which viewers can cycle between different crowd perspectives using a button on their smartphone screen.
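Snap has not detailed how the synchronisation works, but one standard technique for aligning two recordings of the same event is audio cross-correlation: slide one clip’s signal along the other’s and pick the offset where they match best. A minimal sketch of that idea (not Snap’s implementation):

```python
def best_lag(reference, clip):
    """Return the offset (in samples) at which clip best matches reference,
    found by sliding clip along reference and maximising the correlation."""
    best, best_score = 0, float("-inf")
    for lag in range(len(reference) - len(clip) + 1):
        score = sum(r * c for r, c in zip(reference[lag:], clip))
        if score > best_score:
            best, best_score = lag, score
    return best

reference = [0, 0, 0, 1, 4, 2, -1, 0, 0]  # audio from one phone
clip = [1, 4, 2]                          # the same burst, from a second phone
print(best_lag(reference, clip))  # 3
```

At real sample rates this brute-force loop would be done with FFT-based correlation, but the principle – maximise the overlap score – is the same.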

Tech site Mashable has a video demonstrating Crowd Surf during Lorde’s song ‘Green Light’, showing multiple angles, including crowd selfies and the view from stage left.

A Snap spokesperson says Crowd Surf will be available at select events in future.

According to Mashable, with Crowd Surf Snap “hope[s] to bolster its Stories feature so that users submit to them more and also spend more time watching them. That’s good for Snap Inc. The more time users spend with Stories, the more likely they’ll be served an ad, which contributes to the majority of Snap’s revenue.” Snap Inc. posted disappointing financial results in Q2 2017 with a loss of US$443 million, below Wall Street forecasts.

Both Live Nation and AEG Live/Presents have agreed commercial partnerships around their festivals with Snap, with advertisers and sponsors using Snapchat to target festivalgoers. The former has, since last September, also sold tickets on the platform.



How AI is making the music biz more intelligent

Artificial intelligence often brings to mind thoughts of robots performing mundane household tasks or acting as opponents in chess games. The movie industry, meanwhile, imagines a world where artificial intelligence takes over the planet and destroys human existence.

While it is easy to see a place for artificial intelligence (AI) in applications that require logic, mathematics and pattern recognition, the received wisdom is that it does not have a place in creative pursuits. The arts are reserved for the human experience of creating art, music and dance as an outward expression of emotion. At least, this is what people once believed – until new developments in AI started to prove that this theory might not be true.

In their most basic form, musical compositions are a series of algorithms combining patterns and chords. With enough creativity, programmers should be able to write the code that teaches computers how to compose music. In fact, a few companies have already created AIs capable of composing music.
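The claim that a composition is, at bottom, patterns and chords can be made concrete with a toy example: learn which chord tends to follow which from a handful of example progressions, then walk those learned transitions to generate a new one. Real systems are vastly more sophisticated; this is only the skeleton of the idea.

```python
import random
from collections import defaultdict

# Learn chord-to-chord transitions from example progressions.
examples = [
    ["C", "Am", "F", "G"],
    ["C", "F", "G", "C"],
    ["Am", "F", "C", "G"],
]

transitions = defaultdict(list)
for progression in examples:
    for current, nxt in zip(progression, progression[1:]):
        transitions[current].append(nxt)

def generate(start, length, seed=0):
    """Generate a new progression by randomly walking the learned transitions."""
    random.seed(seed)
    out = [start]
    while len(out) < length and transitions[out[-1]]:
        out.append(random.choice(transitions[out[-1]]))
    return out

print(generate("C", 8))
```

This is a first-order Markov chain; systems like the ones described below replace it with deep neural networks trained on far larger corpora, but the core move – predict what plausibly comes next – is the same.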

Aiva Technologies, the creator of AIVA (Artificial Intelligence Virtual Artist), is one of the leading start-ups in AI music composition. Its technology composes classical pieces used by advertising agencies, film directors and game studios. AIVA is a set of neural networks programmed to study the fundamentals of music theory, as well as a vast library of classical music by composers such as Bach, Mozart and Beethoven. With all of this information, AIVA creates new musical compositions in a matter of minutes.

Jukedeck is another start-up creating an advanced neural network capable of complex music composition. The artificial intelligence studies a wide variety of musical compositions and learns to predict note and chord patterns. Although Jukedeck acknowledges its AI still has a lot of learning to do, its results after just two years show a rate of progress a human composer could only hope to match. The company believes its AI composer will one day be as good as any human.

Artificial intelligence is revolutionising the way musicians learn and create music – and how society thinks about art and the creative process

Although it may take a few more years for artificial intelligence to truly master the art of music composition, AI has already transformed the world of music education. Before artificial intelligence applications, learning a new instrument without the assistance of a teacher was extraordinarily difficult. Budding musicians could get a book or video and learn to make a sound and play some notes. They could not, however, get any feedback about their performance. Without a teacher, the musician never knew if something was wrong or how it could be improved.

Today, however, musicians use AI applications that teach and provide instant feedback on their performance. The artificial intelligence analyses the sound and provides feedback on factors such as tone, timing and correct notes, with a variety of apps available for desktop computers and mobile devices.

Most of the applications available are for either piano or guitar. SimplyPiano, Yousician, and Piano Maestro all provide customised piano lessons with real-time feedback to students. The Ultimate Picking Program was developed by Allen Van Wert, who is known as one of the world’s fastest guitar pickers. He wanted to develop a program that would help him and other guitarists to specifically improve their picking technique. Yousician also has lessons available for the bass guitar and ukulele, but for now musicians have to wait for AI programs that teach other instruments.

So far, AI is not replacing the artistry and creativity of songwriters, producers, composers, and musicians. Artificial intelligence is, however, revolutionising the way musicians learn and create music – and the way society thinks about art and the creative process.


Grahame Ferguson is a director of a silent disco company that has brought its shows to Creamfields, Wychwood Festival, Firefly Music Festival, Glastonbury Festival, Bestival, End of the Road, Download and more.

TickX unveils world-first Messenger chatbot

UK tech start-up TickX – the ticket search engine, or ‘Skyscanner for live events’, which last year turned down £75,000 in funding from BBC’s Dragons’ Den – has taken the wraps off its first Facebook chatbot, which it hopes will revolutionise the ticket-buying process.

Developed by a team led by Aayush Chadha, an 18-year-old student of artificial intelligence at the University of Manchester, the bot plugs directly into TickX’s search engine, allowing users to search for tickets to more than 70,000 events from 35+ sellers without ever leaving the Facebook Messenger app.

“The benefits to users are twofold,” Sam Coley, TickX’s co-founder and CTO, tells IQ. “Firstly, chatbots make it quicker and easier to get answers to complex questions. For example, you can ask TickX, ‘When is the cheapest ticket to see The Lion King in July?’, and one second later have the answer and link to compare and buy tickets. The second benefit is that millions of people spend hours on Facebook Messenger each day, so now they can click straight into TickX in one click – [there are] no apps to download, and no need to open a website.”

TickX is the first event ticketing company to take advantage of conversational commerce on the Facebook Messenger app, which has more than 1.2 billion monthly users.

“You can ask TickX, ‘When is the cheapest ticket to see The Lion King in July’, and one second later have the answer and a link to buy tickets”

The launch of the new bot – which goes live on 1 June – follows the pilot launch of a chatbot for Skype by StubHub last August, although the StubHub app is restricted to its own marketplace (one of many crawled by TickX). Seattle start-up ReplyYes, meanwhile, has made a success of selling merch and vinyl via standard text messages.

Coley says the feedback to the beta version of the bot has been “incredibly positive”, although he reveals the company, which is backed by £925,000 in private-equity funding, is already working on its next innovation.

“This Messenger bot really is just the first step for us in making it easier to search events and compare tickets,” he explains. “Over the next few months, alongside continuing to improve our Facebook application, we’ll also be rolling out to voice-based assistants such as Amazon Alexa.”

Watch the chatbot in action below.



Iron men play Iron Man

Signing opportunity of the week?

Meet guitarist Fingers, bassist Bones and drummers Stickboy and Junior. Together they are Compressorhead: four robots who can play a pretty competent, if slightly out of time, cover of Black Sabbath’s ‘Iron Man’ and (if the Terminator franchise is anything to go by) could well be the death of us all.

Built from scrap metal by Berlin-based artists Frank Barnes, Markus Kolb and Stock Plum, Compressorhead move using electro-pneumatic motors and are controlled using MIDI sequencers.

Barnes, Kolb and Plum launched a Kickstarter campaign late last year to try and recruit the band a robotic lead singer, but fell short of their €290,000 funding goal.

Watch them in action in Moscow in the video above.

