Ending weeks of speculation about its famously secretive research operations, Apple has finally delivered what many martech leaders would call the “Golden Egg” of 2016. As the year draws to a close, Apple’s developers have made a splash in the AI research community with the company’s very first published research paper. Apple’s AI paper, “Learning from Simulated and Unsupervised Images through Adversarial Training,” was published through the Cornell University Library on 22 December.
What’s the academic significance of Apple’s AI paper?
Firstly, this is Apple’s first public release of any technology-related research.
Secondly, it is seen as an early signal of how martech innovators are switching gears to leverage AI technology for ROI-centric attribution.
It is evident from the paper that the days of manual image labeling and tagging may finally be numbered. Raw synthetic imagery, which often lacks the realism needed to train robust models, is also being superseded. What Apple’s AI paper prescribes as a solution to these image-processing woes is refined simulated imaging: artificially generated images are made more realistic through adversarial training while retaining their built-in annotations, catalyzing the whole idea of adopting context-rich systems in the future.
Prospects of AI-based Image Recognition Technology in Marketing
Roughly 2 billion images are posted on social media every day. Revenue-wise, the image recognition technology market is expected to grow at a CAGR of 19% and be pegged at US $28 billion by 2020. From a marketing point of view, visual analytics and social listening deliver key contextual insights that help marketers visualize consumer behavior towards a particular brand. Apple has identified this marketing stream as the base for its AI research.
Integration of AI with image recognition APIs will allow marketers to glean insights from vast pools of images. AI-based image recognition can target the relevant customer base through image-optimized landing pages for websites, responsive mobile apps and video advertising platforms. Brands can therefore generate higher revenue by enhancing user engagement, making visual recommendations and providing auto-categorization for instant content discovery.
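As a rough sketch of what auto-categorization for instant content discovery might look like in practice, the snippet below groups images by the tags an image-recognition service returns. Everything here is hypothetical and illustrative: `classify_image` is a stand-in for a real vision API, and the image IDs and tags are made up.

```python
from collections import defaultdict

def classify_image(image_id):
    """Hypothetical stand-in for an image-recognition API call.

    A real service would analyze the image; here we just return
    canned tags so the sketch is self-contained and runnable.
    """
    fake_tags = {
        "img-001": ["sneakers", "outdoor"],
        "img-002": ["watch", "luxury"],
        "img-003": ["sneakers", "running"],
    }
    return fake_tags.get(image_id, [])

def auto_categorize(image_ids):
    """Group images by their dominant tag for content discovery."""
    catalog = defaultdict(list)
    for image_id in image_ids:
        tags = classify_image(image_id)
        if tags:
            # Use the first (most confident) tag as the category.
            catalog[tags[0]].append(image_id)
    return dict(catalog)

catalog = auto_categorize(["img-001", "img-002", "img-003"])
```

A brand catalog built this way can drive visual recommendations (“more sneakers like this”) without anyone hand-tagging the images.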
Vision-based autonomous chatbots and intelligent assistants can help customers while they shop. Pattern recognition, computer vision and augmented reality will converge, rendering accurate real-time imagery to users.
Retail, healthcare, tourism, automobile, media and entertainment, defense, real estate and other diversified businesses will achieve higher ROI by implementing image recognition technologies in their marketing stack.
Apple’s Latest Fascination: “Simulated + Unsupervised” Machine Learning
In the paper, Apple proposes “Simulated + Unsupervised” (S+U) learning, in which the output of a simulator is refined by an AI model rather than labeled by human operators. S+U learning builds on Generative Adversarial Networks (GANs), which rely on a push-pull arrangement of two competing networks: a generator and a discriminator.
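The push-pull arrangement of a GAN can be illustrated with a minimal, framework-free sketch. This is a generic toy GAN in plain NumPy, not Apple’s model: a one-parameter-pair generator learns to imitate a 1-D Gaussian “real” distribution, while a logistic-regression discriminator tries to tell real samples from generated ones.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: G(z) = a*z + c maps noise to samples (starts at N(0, 1)).
a, c = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w*x + b), a logistic classifier.
w, b = rng.normal(), 0.0
lr = 0.05
target_mu = 4.0  # "real" data is drawn from N(4, 1)

for step in range(2000):
    z = rng.normal(size=64)
    fake = a * z + c
    real = rng.normal(loc=target_mu, size=64)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(w * x + b)
        grad = p - label            # cross-entropy gradient w.r.t. the logit
        w -= lr * np.mean(grad * x)
        b -= lr * np.mean(grad)

    # Generator step: push D(fake) toward 1 (non-saturating loss).
    p = sigmoid(w * fake + b)
    grad_logit = p - 1.0            # generator wants the label 1
    grad_fake = grad_logit * w      # chain rule back through D
    a -= lr * np.mean(grad_fake * z)
    c -= lr * np.mean(grad_fake)

# The generator's offset c drifts toward the real mean of 4.0.
print(f"generator output mean ≈ {c:.2f} (real mean is 4.0)")
```

The same tug-of-war drives S+U learning, except that the generator’s role is played by a refiner that starts from a simulator’s synthetic image instead of random noise.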
Adversarial training of this kind has already proved successful in producing ‘super-resolution’ images, which suggests Apple could also pursue video simulation in the near future. In short, Apple intends to extend its AI research to “refine videos”.
Apple’s S+U image recognition algorithm can reportedly process up to 3,000 images per second using only one-third of its GPU stack. Unlike Google or Facebook, Apple appears satisfied with the pace of its AI efforts on standard GPUs.
Evolution on the Cards: More AI-Centric Research From Apple
Colloquially referred to as a very secretive company, Apple has reached a ‘Eureka Moment’ in its academic history by laying out its insights on AI and its impact on computer vision technology. In a drastic shift from its usual secrecy, this is the first time the AI research community gets a glimpse of the resources and ideas brewing inside Apple’s Cupertino labs. And this is definitely not the last research paper on AI: in October 2016, Apple affirmed its commitment to machine learning by appointing Ruslan Salakhutdinov as its Director of AI Research.
The first hint that Apple would publish its own research came in mid-November 2016. Then, on 6 December, it hosted an invitation-only lunch to discuss the problems that AI will be able to tackle successfully in the future.
Apple’s AI paper discusses its insights on machine learning and its applications in predictive analysis, image processing, intelligent assistance and facial recognition. Since AI is now a hugely competitive playground, Apple intends to draw on ideas from the broader AI research community.
For instance, the paper speaks about how marketers could use AI technologies to simulate the original object rather than relying on a mere image of it. The day may not be far off when customers can use AI to distinguish between two rival products and experience them in a real-life adaptation.
Boost To Apple’s Martech Talent Acquisition Strategy
The findings of the latest Apple paper on AI are not as revolutionary as the act of publishing the research on a public platform in the first place. This is an unprecedented event in Apple’s history: the company has allowed its employees to publish research papers. Google and Facebook had earlier removed similar restrictions on their employees publishing research work.
Apple’s maiden research paper is authored by its vision expert Ashish Shrivastava, assisted by a team of AI researchers and engineers. With its newfound appetite for scholarly papers on emerging martech, Apple could very well realign its recruitment efforts: the open publication policy will boost its chances of hiring top AI talent across the tech establishment. So what if Apple shelled out billions to acquire AI startups in recent months? In the game of acquisitions, Apple seems to have finally taken the high road: attract top AI talent rather than buy startups.
Apple employees can finally tell the world what they do rather than wait for a product launch to narrate their story. A heck of an AI gold mine awaits martech innovators in 2017. Cheers, and grab an AI bite!