Quite a few interesting trends this week in AI. On the fundraising side, we saw three different AI startups focused on infrastructure raise capital. I can see this trend continuing as there is no shortage of energy, transportation, and other infrastructure-related assets that could benefit from improvements in maintenance efficiency through the use of AI. In addition, Google’s DeepMind has made progress on its algorithm for optimizing the cooling systems in Google’s data centers.
Company developments:
Google just gave control over data center cooling to an AI – Aug 18, 2018 (Techstory)
- Over the past few years, Google has been testing an algorithm that figures out how best to adjust its cooling systems (fans, ventilation, and other equipment) in order to lower power consumption. The system previously made recommendations to data center managers, who would decide whether or not to implement them, leading to energy savings of around 40 percent in those cooling systems
- The algorithm uses a technique known as reinforcement learning, which learns through trial and error. The same approach produced AlphaGo, the DeepMind program that defeated human players at the board game Go (see “10 Breakthrough Technologies: Reinforcement Learning”)
- DeepMind fed its new algorithm data gathered from Google’s data centers and let it work out which cooling configurations would reduce energy consumption. The project could generate millions of dollars in energy savings and may help the company lower its carbon emissions, says Joe Kava, VP of data centers for Google
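As a rough illustration of the trial-and-error idea behind this work, here is a minimal sketch of an agent that tries cooling setpoints, observes the resulting power draw, and gradually settles on the settings that save the most energy. The simulator, setpoints, and numbers below are all invented; DeepMind’s system learns from real data center telemetry and controls far more than a single fan speed.

```python
import random

# Toy stand-in for data center telemetry: power draw (kW) at a given fan-speed
# setpoint, with noise. The real system learns from measured data, not a formula.
def observed_power_kw(fan_speed_pct):
    waste = (fan_speed_pct - 65) ** 2 / 40        # pretend 65% is the sweet spot
    return 900 + waste + random.gauss(0, 5)

setpoints = list(range(40, 101, 5))               # candidate fan speeds (%)
avg_power = {}                                    # running power estimate per setpoint
counts = {}

# Initialize by trying every setpoint once.
for s in setpoints:
    avg_power[s] = observed_power_kw(s)
    counts[s] = 1

epsilon = 0.1                                     # exploration rate
for _ in range(5000):
    if random.random() < epsilon:                 # explore occasionally...
        s = random.choice(setpoints)
    else:                                         # ...otherwise exploit the best known setting
        s = min(setpoints, key=avg_power.get)
    power = observed_power_kw(s)
    counts[s] += 1
    avg_power[s] += (power - avg_power[s]) / counts[s]   # incremental mean

best = min(setpoints, key=avg_power.get)
print(f"learned setpoint: {best}% fans, estimated draw ~{avg_power[best]:.0f} kW")
```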
China AI unicorn SenseTime launches automatic ‘touch-up’ tool for self-conscious live-streamers – Aug 17, 2018 (South China Morning Post)
- Powered by AI-backed technology, the filter identifies different parts of a user’s body and face, and touches them up automatically to enhance their look – without distorting the background. The feature marks a major step up from existing products in the market that allow users to mainly “beautify” still pictures of faces and selfies, according to SenseTime
- “It’s a convenience feature – for example, imagine you feel like live streaming at 10pm but want to skip putting on make-up just for that,” said Li Xingye, SenseTime vice-president of internet and adverts business, at a product launch event held in Beijing on Wednesday. Li said the company will seek to embed the touch-up function into the live-streaming apps of major players via fee-paying partnerships
- Founded at the Hong Kong Science Park in 2014, SenseTime specialises in facial recognition and computer vision, and has positioned itself as a “platform company” for AI technologies. Known for providing AI-powered surveillance software for China’s police, SenseTime said it achieved profitability last year on the back of selling AI-powered applications for smart cities, surveillance, smartphones, internet entertainment, finance, retail and other industries
Incentivai launches to simulate how hackers break blockchains – Aug 17, 2018 (TechCrunch)
- Incentivai is coming out of stealth today with artificial intelligence simulations that test not just for security holes, but for how greedy or illogical humans can crater a blockchain community. Crypto developers can use Incentivai’s service to fix their systems before they go live
- “There are many ways to check the code of a smart contract, but there’s no way to make sure the economy you’ve created works as expected,” says Incentivai’s solo founder Piotr Grudzień. “I came up with the idea to build a simulation with machine learning agents that behave like humans so you can look into the future and see what your system is likely to behave like.”
- Incentivai will graduate from Y Combinator next week and already has a few customers. They can either pay Incentivai to audit their project and produce a report, or they can host the AI simulation tool like a software-as-a-service. The first deployments of blockchains it’s checked will go out in a few months, and the startup has released some case studies to prove its worth
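Incentivai has not published how its simulations work, but the idea Grudzień describes (machine agents with human-like motivations stress-testing a crypto-economic design) can be illustrated with a toy model. Everything below, from the voting rules to the bribe mechanic, is invented for illustration; it simply shows how an agent simulation can reveal the conditions under which a mechanism breaks.

```python
import random

# Toy token-curated registry: agents stake a vote to accept or reject a listing.
# Honest agents vote on the listing's (bad) quality; "greedy" agents take a bribe
# to vote "accept" whenever the bribe beats their expected protocol reward.
def simulate(n_agents=100, greedy_share=0.3, reward=1.0, bribe=0.0, trials=2000):
    bad_listing_wins = 0
    for _ in range(trials):
        votes_accept = 0
        for _ in range(n_agents):
            greedy = random.random() < greedy_share
            if greedy and bribe > reward:
                votes_accept += 1        # bribed vote against the listing's quality
            elif random.random() < 0.05:
                votes_accept += 1        # occasional honest mistake
        if votes_accept > n_agents / 2:
            bad_listing_wins += 1
    return bad_listing_wins / trials

# Sweep the parameters a designer might worry about: how many agents are greedy,
# and how large a bribe an attacker can afford relative to honest rewards.
for greedy_share in (0.3, 0.6):
    for bribe in (0.5, 2.0):
        rate = simulate(greedy_share=greedy_share, bribe=bribe)
        print(f"greedy={greedy_share:.0%}, bribe={bribe}: bad listing wins {rate:.1%}")
```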
YC-backed Sterblue aims to enable smarter drone inspections – Aug 17, 2018 (TechCrunch)
- The startup’s software is specifically focused on enabling drones to easily inspect large power lines or wind turbines with simple automated trajectories that can get a job done much quicker and with less room for human error. The software also allows the drones to get much closer to the large structures they are scanning so the scanned images are as high-quality as possible
- Compared with navigating a tight urban environment, Sterblue has the benefit of there being very few airborne obstacles around these structures, so autonomously flying along set flight paths is as easy as having a CAD model of the structure available and enough wiggle room to correct for things like wind conditions
- Operators basically just have to connect their drones to the Sterblue cloud platform, where they can upload photos and view 3D models of the structures they have scanned while letting the startup’s neural net identify any issues that need further attention. All in all, Sterblue says its software can let drones get within three meters of power lines and wind turbines, which allows its AI systems to easily detect anomalies in the photos being taken. Sterblue says its system can detect defects as small as one millimeter
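Sterblue has not released its flight-planning code; the sketch below only illustrates the general idea of deriving an automated inspection trajectory from a structure’s geometry while holding the three-meter standoff mentioned above and pre-compensating for wind drift. The cylindrical tower model and the drift estimate are invented.

```python
import math

STANDOFF_M = 3.0           # keep the drone ~3 m from the structure (per the article)
TOWER_RADIUS_M = 2.0       # toy CAD model: a cylindrical tower section
WIND_DRIFT_M = (0.4, 0.0)  # estimated drift to pre-compensate (east, north), invented

def inspection_waypoints(tower_xy, heights_m, points_per_ring=12):
    """Rings of camera positions around a cylindrical tower, wind-corrected."""
    cx, cy = tower_xy
    r = TOWER_RADIUS_M + STANDOFF_M
    waypoints = []
    for z in heights_m:
        for k in range(points_per_ring):
            a = 2 * math.pi * k / points_per_ring
            x = cx + r * math.cos(a) - WIND_DRIFT_M[0]   # shift against the drift
            y = cy + r * math.sin(a) - WIND_DRIFT_M[1]
            yaw = math.degrees(a + math.pi)              # camera faces the tower
            waypoints.append((round(x, 2), round(y, 2), z, round(yaw, 1)))
    return waypoints

plan = inspection_waypoints(tower_xy=(0.0, 0.0), heights_m=[10, 20, 30])
print(f"{len(plan)} waypoints, first: {plan[0]}")
```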
Kroger rolls out driverless cars for grocery deliveries – Aug 16, 2018 (KCRA 3)
- Kroger’s pilot program launched Thursday morning with a robotic vehicle parked outside one of its own Fry’s supermarkets in Scottsdale. A store clerk loaded the backseat with full grocery bags. A man was in the driver’s seat and another was in the front passenger seat with a laptop. Both were there to monitor the car’s performance
- Under the self-driving service, shoppers can order same-day or next-day delivery online or on a mobile app for a flat rate of about $6. After the order is placed, a driverless vehicle will deliver the groceries curbside, requiring customers to be present to fetch them. The vehicles will probably be opened with a numeric code
- Currently, Kroger is operating with Toyota Prius vehicles. During the next phase of testing in the fall, deliveries will be made by a completely autonomous vehicle with no human aboard. Kroger Co. is partnering with Nuro, a Silicon Valley startup founded by two engineers who worked on autonomous vehicles at Google
Ford reveals autonomous vehicle philosophies, priorities – Aug 16, 2018 (Automotive News)
- The 44-page report covers Ford’s goals, philosophy, priorities and technical approach to self-driving vehicle development, which it states “is not a race.”
- “You don’t bolt on safety. It has to be ingrained in your culture and every decision you make,” Bryan Salesky, CEO of Argo AI, Ford’s autonomous vehicle development partner, told Automotive News Wednesday. “That’s the culture at Ford Motor Co. and that’s the culture we’ve created at Argo. … It’s about creating a principled process to guide how you develop the product, and that’s what you see in the report.”
Google is reportedly developing an AI assistant that recommends workouts and meal plans – Aug 15, 2018 (VentureBeat)
- Fitness will be Google Coach’s bread and butter, Android Police reports. But unlike Google Fit, Google’s activity-tracking platform, it’ll deliver insights proactively, informed in part by calendar appointments, reminders, and logged activities. If you skip a scheduled gym day, for example, it might nudge you to find another time. And if you’re falling short of a fitness goal, it could suggest workouts and routines that would help you achieve it
- Google Coach’s suggestions will come in the form of notifications and messages, mostly, but according to Android Police, algorithms will attempt to consolidate multiple ideas in one so as to prevent “notification overload.” The AI-powered assistant will reportedly launch first on smartwatches — live activity tracking will require a Wear OS device — but eventually come to smartphones, set-top boxes, smart speakers, and other devices “in some capacity.”
- The idea isn’t a novel one. Startups like Noom leverage AI to analyze diet, weight, and exercise and suggest recipes and workouts. And Vi, a New York-based company that raised $20 million in June, sells earbuds with a conversational running coach who provides feedback in real time
New Uber feature uses machine learning to sort business and personal rides – Aug 13, 2018 (TechCrunch)
- Uber announced a new program today called Profile Recommendations that takes advantage of machine intelligence to reduce user error when switching between personal and business accounts
- Uber has been analyzing a dizzying amount of trip data for so long, it can now (mostly) understand the purpose of a given trip based on the details of your request. While it’s certainly not perfect because it’s not always obvious what the purpose is, Uber believes it can determine the correct intention 80 percent of the time. For that remaining 20 percent, when it doesn’t get it right, Uber is hoping to simplify corrections too
- Business users can now also assign trip reviewers — managers or other employees who understand the employee’s usage patterns, and can flag questionable rides. Instead of starting an email thread or complicated bureaucratic process to resolve an issue, the employee can now see these flagged rides and resolve them right in the app. “This new feature not only saves the employee’s and administrator’s time, but it also cuts down on delays associated with closing out reports,” Gurion wrote in the blog post announcement
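Uber has not described its model, but the business-versus-personal prediction above is, at heart, a binary classifier over trip features. A purely illustrative sketch with made-up features and synthetic labels (the real system presumably draws on far richer trip and expense-policy data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Illustrative features per trip: [hour_of_day, is_weekday, dropoff_is_office,
# dropoff_is_airport, rider_has_expense_policy]. Labels: 1 = business, 0 = personal.
rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.integers(0, 24, n),          # hour of day
    rng.integers(0, 2, n),           # weekday flag
    rng.integers(0, 2, n),           # dropoff looks like an office
    rng.integers(0, 2, n),           # dropoff is an airport
    rng.integers(0, 2, n),           # rider linked to a corporate policy
])
# Synthetic ground truth: office/airport trips on weekdays during work hours tend
# to be business. Real labels would come from how riders tagged past trips.
logit = (-2 + 1.5*X[:, 1] + 1.2*X[:, 2] + 1.0*X[:, 3] + 1.5*X[:, 4]
         - 0.8*((X[:, 0] < 8) | (X[:, 0] > 19)))
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.0%}")
```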
M&A:
Intel acquires AI startup Vertex.ai – Aug 16, 2018 (VentureBeat)
- Vertex.ai will join the chipmaker’s Artificial Intelligence Products Group, according to a note on its website, where it’ll “support a variety of hardware” and work to integrate PlaidML, its “multi-language acceleration platform” that lets developers deploy AI models on Linux, macOS, and Windows devices, with Intel’s nGraph machine learning backend. It’ll continue to develop PlaidML, which is open source, under the Apache 2.0 license
- “Intel has acquired Vertex.ai, a Seattle-based startup focused on deep learning compilation tools and associated technology,” Intel said in a statement. “The seven-person Vertex.ai team joined the Movidius team in Intel’s Artificial Intelligence Products Group. With this acquisition, Intel gained an experienced team and IP to further enable flexible deep learning at the edge. Additional details and terms are not being disclosed.”
- Vertex.ai was founded in 2015 by Jeremy Bruestle and Choong Ng, with the mission of creating a framework that bridged the gap between hardware and AI-enabled software. It attracted seed money from Curious Capital and Toronto, Canada-based Creative Destruction Lab, among other investors
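For context on what PlaidML does, the snippet below shows the usage pattern its documentation has promoted: install the PlaidML backend before importing Keras so the same model code runs on whatever hardware the machine has. Treat it as a sketch of the pre-acquisition workflow rather than current guidance; package names and behavior may have changed since.

```python
# Route Keras execution through PlaidML's hardware-agnostic acceleration layer
# instead of TensorFlow. (Pattern from PlaidML's public docs; details may differ.)
import plaidml.keras
plaidml.keras.install_backend()

from keras.models import Sequential
from keras.layers import Dense

model = Sequential([
    Dense(64, activation="relu", input_shape=(32,)),
    Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.summary()   # the layers are compiled to PlaidML operations at run time
```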
Fundraising / investment:
Startup focused on AI for infrastructure scores $4.5M funding – Aug 15, 2018 (Construction Dive)
- United Kingdom-based startup SenSat secured $4.5 million in seed funding for further development of its artificial intelligence product, which is targeted at the infrastructure construction sector, TechCrunch reported. The seed round was backed by Force Over Mass, Round Hill Venture Partners and Zag
- SenSat uses drone imagery and spatial data to create a real-time simulation of a real-world location. The simulation allows a computer to learn how things work in that space and to make optimal decisions based on a wide range of variables. SenSat said, for example, its product can learn design requirements for a particular project and in minutes, select the most efficient design approach out of thousands of options
- Infrastructure construction is the startup’s first target because of the availability of data on small- to medium-size areas and the potential of time and cost savings amounting to 40% of the project’s value, according to TechCrunch. The company plans to scale its product in size and complexity in the future
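SenSat’s pitch of choosing “the most efficient design approach out of thousands of options” is, at its core, a constrained search over a digital model of the site. A heavily simplified, hypothetical version of that selection step (the candidate attributes, constraint and cost model are all invented):

```python
import random

# Toy site model: candidate route options with invented attributes. In practice
# these would come from a drone-derived terrain model, not random numbers.
random.seed(1)
candidates = [
    {
        "route_id": i,
        "length_km": random.uniform(3.0, 6.0),
        "earthworks_m3": random.uniform(10_000, 80_000),
        "max_gradient_pct": random.uniform(2.0, 9.0),
    }
    for i in range(5000)
]

MAX_GRADIENT_PCT = 6.0      # design requirement (illustrative)
COST_PER_KM = 2.5e6         # rough unit costs (illustrative)
COST_PER_M3 = 12.0

def cost(option):
    return option["length_km"] * COST_PER_KM + option["earthworks_m3"] * COST_PER_M3

feasible = [c for c in candidates if c["max_gradient_pct"] <= MAX_GRADIENT_PCT]
best = min(feasible, key=cost)
print(f"{len(feasible)} feasible of {len(candidates)}; "
      f"best route {best['route_id']}, cost index {cost(best)/1e6:.1f}")
```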
Clobotics raises $11 million for retail and wind energy AI – Aug 15, 2018 (VentureBeat)
- The Shanghai and Seattle firm today announced an $11 million funding round led by Nantian Infotech VC and Wangsu Science and Technology, both of which join previous investors KTB Network, GGV Capital, and Capital Development Investment Fund Management Co
- Clobotics, which was founded in 2015 by former Microsoft and Ehang executive George Yan, leverages its computer vision software platform to provide “real-time, data-driven insights” across verticals. With the infusion of new capital, it plans to expand its business in North America, invest in product development, and build on its team of computer vision, AI, and machine learning experts
- Clobotics Smart Wind, its bespoke solution for preventative wind turbine maintenance, combines autonomous drone hardware with AI-infused software. Its drones snap high-quality footage of turbines and apply machine learning models to identify weakened components and pass that information along to Clobotics’ cloud-hosted monitoring platform. They also share telemetry information with customers in real time. Clobotics’ Smart Retail recognizes product displays, barcodes, and assortments and makes suggestions to improve sales and profitability. Customers use a smartphone app to capture photos of store shelves, which Clobotics’ software-as-a-service platform parses in the cloud
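On the retail side, the workflow comes down to comparing what the vision system recognizes in a shelf photo against the assortment the store is supposed to carry. A hypothetical version of that reconciliation step, with made-up SKUs and counts standing in for the recognition model’s output:

```python
# Hypothetical output of a shelf-photo recognition model: SKU -> facings counted.
detected = {"cola-330ml": 6, "water-500ml": 0, "juice-1l": 2, "soda-diet-330ml": 4}

# Planogram: SKU -> minimum facings the retailer expects on this shelf.
planogram = {"cola-330ml": 4, "water-500ml": 3, "juice-1l": 3, "tea-unsweet-500ml": 2}

issues = []
for sku, required in planogram.items():
    found = detected.get(sku, 0)
    if found == 0:
        issues.append(f"OUT OF STOCK: {sku}")
    elif found < required:
        issues.append(f"LOW STOCK: {sku} ({found}/{required} facings)")

unexpected = [sku for sku, count in detected.items() if sku not in planogram and count > 0]
print("\n".join(issues) or "shelf matches planogram")
print("not on planogram:", unexpected)
```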
Accenture Invests In China-Based AI Firm – Aug 13, 2018 (CRN)
- Accenture Monday said it has invested in Malong Technologies, a Shenzhen, China-based company focused on developing artificial intelligence technology for retail manufacturing and business applications, and has signed a strategic alliance agreement with the company
- Malong Technologies’ primary technology is ProductAI, which allows visual product search and tagging. ProductAI uses artificial intelligence to recognize a product based on its visible attributes, much as a human would, without the need for barcodes
- In addition to Accenture’s investment of an unspecified sum in Malong Technologies, the two have signed an alliance to jointly develop industry solutions and prepare go-to-market activities. As part of the agreement, Malong Technologies has designated Accenture as its preferred systems integrator and consulting partner
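Visual product search of the kind ProductAI offers is typically implemented as nearest-neighbour lookup over image embeddings. A bare-bones sketch with random vectors standing in for real embeddings (a production system would use a trained vision model and an approximate index rather than brute force):

```python
import numpy as np

rng = np.random.default_rng(42)
catalog_ids = [f"sku-{i:04d}" for i in range(10_000)]
catalog_emb = rng.normal(size=(10_000, 128))              # pretend CNN embeddings
catalog_emb /= np.linalg.norm(catalog_emb, axis=1, keepdims=True)

def visual_search(query_emb, top_k=5):
    """Return the catalog items whose embeddings are most similar to the query."""
    q = query_emb / np.linalg.norm(query_emb)
    scores = catalog_emb @ q                              # cosine similarity
    top = np.argsort(scores)[::-1][:top_k]
    return [(catalog_ids[i], float(scores[i])) for i in top]

# A query photo would be embedded by the same model; here we fake one that is
# close to a known catalog item.
query = catalog_emb[1234] + 0.05 * rng.normal(size=128)
for sku, score in visual_search(query):
    print(sku, f"{score:.3f}")
```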
Observe.AI raises $8M to use artificial intelligence to improve call centers – Aug 13, 2018 (TechCrunch)
- The funding round was led by Nexus Venture Partners, with participation from MGV, Liquid 2 Ventures and Hack VC. Existing investors Emergent Ventures and Y Combinator also took part; Observe.AI was part of YC’s winter 2018 batch
- The India-U.S. startup was founded last year with the goal of solving a very personal problem for founders Swapnil Jain (CEO), Akash Singh (CTO) and Sharath Keshava (CRO): making call centers better. But, unlike most AI products that offer the potential to fully replace human workforces, Observe.AI is setting out to help the humble customer service agent
- The company’s first product is an AI that assists call center workers by automating a range of tasks, from auto-completing forms for customers, to guiding them on next steps in-call and helping find information quickly. Jain told TechCrunch in an interview that the product was developed following months of consultation with call center companies and their staff, both senior and junior. That included a stint in Manila, one of the world’s capitals for offshoring customer services and a city well known to Keshava, who helped healthcare startup Practo launch its business in the Philippines’ capital
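Observe.AI has not detailed how its product works; as a rough illustration of one piece, “auto-completing forms” generally means pulling structured fields out of the live transcript. A toy, regex-only version is below; a real system would rely on trained language models rather than hand-written patterns.

```python
import re

transcript = (
    "Agent: Thanks for calling, may I have your order number? "
    "Caller: Sure, it's A-48213. My email is jane.doe@example.com and "
    "you can reach me at 415-555-0142 if we get cut off."
)

# Hypothetical form fields and the patterns used to pre-fill them.
patterns = {
    "order_number": r"\b[A-Z]-\d{5}\b",
    "email": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "phone": r"\b\d{3}-\d{3}-\d{4}\b",
}

form = {field: (m.group(0) if (m := re.search(rx, transcript)) else None)
        for field, rx in patterns.items()}
print(form)   # {'order_number': 'A-48213', 'email': 'jane.doe@example.com', ...}
```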
Partnerships:
DeepMind’s AI can recommend treatment for more than 50 eye diseases with 94% accuracy – Aug 13, 2018 (VentureBeat)
- An AI system created by Google’s DeepMind Health, Moorfields Eye Hospital NHS Foundation Trust, and University College London (UCL) Institute of Ophthalmology can correctly determine how to refer optometry patients in 94 percent of cases, putting it on par with top human experts
- “The AI technology we’re developing is designed to prioritize patients who need to be seen and treated urgently by a doctor or eye care professional,” said UCL scientist Dr. Pearse Keane in a statement shared with VentureBeat. “If we can diagnose and treat eye conditions early, it gives us the best chance of saving people’s sight. With further research, it could lead to greater consistency and quality of care for patients with eye problems in the future.”
- Instead of just spitting out recommendations, the system can provide doctors with an explanation for why it chose to make a particular recommendation, as well as indicating a percentage of confidence it has in a suggested course of treatment
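The shape of that output (a referral decision plus a confidence figure) can be illustrated with a softmax over per-tier scores. The tier names below are illustrative (the published system used a similar set of referral urgencies), and the scores are invented.

```python
import numpy as np

REFERRAL_TIERS = ["urgent", "semi-urgent", "routine", "observation only"]

def referral_recommendation(class_scores):
    """Turn raw per-tier scores into a recommendation plus a confidence percentage."""
    scores = np.asarray(class_scores, dtype=float)
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                         # softmax -> probabilities
    best = int(np.argmax(probs))
    return REFERRAL_TIERS[best], round(100 * float(probs[best]), 1)

# Invented per-tier scores for one scan, standing in for real model output.
decision, confidence = referral_recommendation([0.4, 2.9, 1.1, -0.5])
print(f"recommend: {decision} referral ({confidence}% confidence)")
```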
Research / studies:
Researchers develop AI that can re-create real-world lighting and reflections – Aug 15, 2018 (VentureBeat)
- Today during the 2018 Siggraph conference in Vancouver, the team jointly presented “Single-Image SVBRDF Capture with a Rendering-Aware Deep Network,” a method for extracting the texture, highlights, and shading of materials in photographs and digitally recreating the environment’s lighting and reflection
- “[V]isual cues … allow humans to perceive material appearance in single pictures,” the researchers wrote. “Yet, recovering spatially-varying bi-directional reflectance distribution functions — the function of the four variables that defines how light is reflected at an opaque surface — from a single image based on such cues has challenged researchers in computer graphics for decades. We tackle [the problem] by training a deep neural network to automatically extract and make sense of these visual cues.”
- The researchers started with samples — lots of them. They sourced a dataset of more than 800 “artist-created” materials, ultimately selecting 155 “high-quality” sets from nine different classes (paint, plastic, leather, metal, wood, fabric, stone, ceramic tiles, ground) and, after setting aside about a dozen to serve as a testing set, rendered them in a virtual scene meant to mimic a cellphone camera’s field of view (50 degrees) and flash
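The “rendering-aware” part of the title refers to comparing renders of the predicted material maps against renders of the ground truth under sampled lighting directions, rather than comparing the maps pixel by pixel. A heavily simplified illustration of that loss with a toy diffuse-plus-specular shader (the paper’s actual shading model, network, and training setup are considerably more involved):

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 64

def render(albedo, roughness, light_dir, view_dir):
    """Toy shading: Lambertian diffuse plus a Blinn-Phong-style specular lobe."""
    n = np.array([0.0, 0.0, 1.0])                       # flat surface normal
    h = light_dir + view_dir
    h = h / np.linalg.norm(h)
    diffuse = albedo * max(np.dot(n, light_dir), 0.0)
    shininess = 2.0 / np.clip(roughness, 1e-3, 1.0) ** 2
    specular = max(np.dot(n, h), 0.0) ** shininess
    return diffuse + specular

# Ground-truth and "predicted" material maps (per-pixel albedo and roughness).
gt_albedo, gt_rough = rng.random((H, W)), rng.random((H, W)) * 0.5 + 0.25
pred_albedo = gt_albedo + rng.normal(0, 0.05, (H, W))   # pretend network output
pred_rough = gt_rough + rng.normal(0, 0.05, (H, W))

# Rendering-aware loss: average L1 difference of renders under random lights/views.
loss = 0.0
for _ in range(8):
    l = rng.normal(size=3); l[2] = abs(l[2]); l /= np.linalg.norm(l)
    v = rng.normal(size=3); v[2] = abs(v[2]); v /= np.linalg.norm(v)
    loss += np.abs(render(pred_albedo, pred_rough, l, v)
                   - render(gt_albedo, gt_rough, l, v)).mean()
print(f"rendering loss: {loss / 8:.4f}")
```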
Intel and Philips use Xeon chips to speed up AI medical scan analysis – Aug 14, 2018 (VentureBeat)
- Philips Medical, Philips’ medical supply and sensor division, published the results of recent machine learning tests performed on Intel’s Xeon Scalable processors with Intel’s OpenVINO computer vision toolkit. Researchers explored two use cases: one on X-rays of bones to model how bone structures change over time, and the other on CT scans of lungs for lung segmentation (i.e., identifying the boundaries of the lungs from the surrounding tissue)
- They achieved a speed improvement of 188 times for the bone-age-prediction model, which went from a baseline result of 1.42 images per second to a rate of 267.1 images per second. The lung-segmentation model, meanwhile, saw a 38 times speed improvement, processing 71.7 images per second after optimizations, up from 1.9 images per second
- Intel contends that its processors, rather than the powerful graphics cards popularly used to train and run machine learning models, have a critical advantage when it comes to computer vision: the ability to handle larger, more memory-intensive algorithms
Researchers use AI to match patients with primary care doctors – Aug 14, 2018 (VentureBeat)
- Researchers at Wright State University, University of California, Davis, and Universidade Nova de Lisboa think artificial intelligence (AI) has a role to play in matching patients with primary care doctors. In a new paper (“A Hybrid Recommender System for Patient-Doctor Matchmaking in Primary Care“), they propose a recommender system they claim makes primary care providers “more directly accessible” by improving patient-doctor matches
- “Given that trust in patient-doctor relationships plays a central role in improving patients’ health outcomes and satisfaction with their care, it would be preferable to match patients with family doctors that they are willing to consult with high trust,” they wrote. “[Our approach] generate[s] personalized doctor recommendations for each patient that they may trust the most.”
- They sourced data from a private health care provider and clinical network in Portugal that serves over 2.5 million patients a year. With a database of 42 million interactions between patients and doctors (“interactions” defined here as episodes that included a set of services provided to treat a clinical condition) between 2012 and 2017, plus basic demographic information (gender, age, residence, etc.), doctor registration data, and a complementary dataset describing hospital inpatient procedures in hand, they set about training the system
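The “hybrid” in the paper’s title refers to blending collaborative signals from past patient-doctor interactions with content features such as demographics and specialty. A compact, invented illustration of that blending step (the actual model, features, and weighting in the paper differ):

```python
import numpy as np

rng = np.random.default_rng(3)
n_patients, n_doctors, dim = 200, 50, 16

# Collaborative part: latent factors learned from past interactions (faked here).
patient_factors = rng.normal(size=(n_patients, dim))
doctor_factors = rng.normal(size=(n_doctors, dim))

# Content part: similarity between patient profile and doctor profile
# (e.g. age group served, location, specialty match), also faked.
content_similarity = rng.random((n_patients, n_doctors))

def recommend_doctors(patient_id, alpha=0.7, top_k=3):
    """Blend collaborative and content scores; return the top-k doctor indices."""
    collab = doctor_factors @ patient_factors[patient_id]
    collab = (collab - collab.min()) / (collab.max() - collab.min())  # scale to 0..1
    score = alpha * collab + (1 - alpha) * content_similarity[patient_id]
    return list(np.argsort(score)[::-1][:top_k])

print("suggested doctors for patient 7:", recommend_doctors(7))
```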
Nvidia DGX1-V Appliance Crushes NLP Training Baselines – Aug 13, 2018 (Next Platform)
- A research team from Nvidia has provided interesting insight about using mixed precision on deep learning training across very large training sets and how performance and scalability are affected by working with a batch size of 32,000 using recurrent neural networks
- The big batch size was parallelized across 128 Volta V100 GPUs in the Nvidia DGX1-V appliance for unsupervised text reconstructions over 3 epochs of the dataset in four hours. This time to convergence is worth noting but so is the complexity of scaling a recurrent neural network on such a large batch size, which has implications for the overall learning rate compared to other approaches to training models on large natural language datasets
- With the DGX1-V system comes NCCL2 (the Nvidia Collective Communications Library) for intra- and inter-node communication, which uses NVLink and InfiniBand connections to allow the GPUs to talk to one another
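Mixed precision training of the kind described here keeps an FP32 “master” copy of the weights, runs the compute-heavy passes in FP16, and scales the loss so small gradients don’t underflow in half precision. A bare-bones numpy illustration of that bookkeeping on a linear model follows; it involves no GPUs, NCCL, or recurrent networks, and only shows the precision-handling pattern.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 32)).astype(np.float16)        # activations kept in FP16
true_w = rng.normal(size=(32, 1)).astype(np.float32)
y = (X.astype(np.float32) @ true_w).astype(np.float16)

master_w = np.zeros((32, 1), dtype=np.float32)           # FP32 master weights
loss_scale = 1024.0                                      # keeps tiny grads above FP16's floor
lr = 0.1

for step in range(200):
    w16 = master_w.astype(np.float16)                    # FP16 copy for compute
    pred = X @ w16
    err = pred - y
    # Backward pass in FP16 on the *scaled* loss, then unscale in FP32.
    grad16 = (X.T @ (err * np.float16(loss_scale))) / np.float16(len(X))
    if not np.isfinite(grad16).all():                    # overflow: skip step, shrink scale
        loss_scale /= 2
        continue
    grad32 = grad16.astype(np.float32) / loss_scale
    master_w -= lr * grad32                              # update master weights in FP32

residual = X.astype(np.float32) @ master_w - y.astype(np.float32)
print("final mean squared error:", float(np.mean(residual ** 2)))
```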
Government / policy:
CBP launches facial recognition tech at San Jose Airport, more airports next – Aug 18, 2018 (Citizen Truth)
- U.S. Customs and Border Protection (CBP) has installed facial recognition technology at Mineta San Jose International Airport (SJC) in Silicon Valley. The technology is already being used to process international travelers entering and leaving the country, making SJC one of the first major West Coast airports to implement it. The busy international airport is just a few miles from the headquarters of Facebook and Google, among other major tech companies
- “As one of the nation’s main regions of innovation, Silicon Valley is at the forefront of transforming the travel experience through biometrics,” said CBP Commissioner Kevin McAleenan in a statement. “CBP is excited to partner with SJC, which serves as another example of what we can achieve by advancing the entry/exit mandate through public-private collaboration, adding benefits for travelers and stakeholders across the air travel ecosystem.”
- Several other airports in the country have begun using facial recognition systems to capture passengers exiting their facilities, including JFK in New York, O’Hare in Chicago and Dulles in Washington. However, SJC is among the first to start using the technology for passengers arriving at an airport