Posts

Showing posts from March, 2018

5 Myths About Cognitive Computing

Artificial intelligence (AI) is one of the most frequently discussed topics in business today, but even more than most new technologies, its promise is sometimes obscured by a set of lingering myths—particularly among those whose exposure to the technology has been limited. Professionals with first-hand experience have a different perspective, according to the 2017 Deloitte State of Cognitive Survey, which is based on interviews with 250 business executives who have already begun adopting and using AI and cognitive technologies. The responses of these early adopters shed considerable light on the current state of cognitive technology in organizations. Along the way, they help dispel five of the most persistent myths.

Myth 1: Cognitive is all about automation

It is rare to find a media report about AI that doesn’t speculate about job losses. Much of the reason for that is the commonly held belief that the technology’s primary purpose is automating human work. But that’s hardly the

Bias in AI Increasingly Recognized; Progress Being Made

Bias in AI decision-making and in machine-learning algorithms has been outed as a real issue in the march of AI progress. Here is an update on where we are and on efforts being made to recognize bias and counteract it, including a discussion of selected AI startups. AI reflects the bias of its creators, notes Will Bryne, CEO of Groundswell, in a recent article in Fast Company. Societal bias – attributing distinct traits to individuals or groups without any data to back it up – is a stubborn problem, and AI has the potential to make it worse. “The footprint of machine intelligence on critical decisions is often invisible, humming quietly beneath the surface,” he writes. AI is driving decision-making on loan-worthiness, medical diagnosis, job candidates, parole determination, criminal punishment and educator performance. How will AI be fair and inclusive? How will it engage and support the marginalized and most vulnerable in society? Courts across the US are using a softw
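One concrete way bias in automated decisions is often quantified is the demographic parity gap: the difference in approval rates between groups. The sketch below is illustrative only (the function name, group labels, and toy loan decisions are all invented for this example) and is not drawn from any specific tool mentioned in the article.

```python
# Hypothetical sketch: measuring one common fairness gap (demographic parity)
# in a model's decisions. Group labels and decisions are made-up toy data.

def demographic_parity_gap(decisions, groups):
    """Difference in approval rates between the groups present."""
    rates = {}
    for g in set(groups):
        selected = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(selected) / len(selected)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Toy loan decisions: 1 = approved, 0 = denied
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(decisions, groups)
print(gap)  # 0.75 approval rate for A vs 0.25 for B -> gap of 0.5
```

A gap near zero is no guarantee of fairness, but a large gap like this one is the kind of invisible footprint the article describes, made visible with a few lines of measurement.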

Human Back-up Drivers for AI Self-Driving Cars

By Lance Eliot, the AI Trends Insider How would you like to work as a human back-up driver in a state-of-the-art AI self-driving car? Sounds glamorous. You can impress your friends and colleagues by bragging about going around town in the future of automobiles. You are the future. In a sense, you feel like an astronaut taking us to new planets and new horizons. Or, there’s another view. You sit in a car all day long, waiting to see if you need to do anything. Most of the time, you essentially do nothing. You are a cog in the great AI machine. Machines are taking over, and you are helping this to happen. You are the enemy of humanity. In the parlance of Star Trek, you are a dunsel (a term used in the fictional Star Trek series by the Federation to refer to someone who serves no particular useful purpose). What’s the truth? The job leans mostly toward the less glamorous side. Indeed, as I’ll explain next, it’s a thankless kind of job t

Here Are Six Machine Learning Success Stories

Deep Learning: It’s Time for AI to Get Philosophical

By Catherine Stinson, postdoctoral scholar at the Rotman Institute of Philosophy, University of Western Ontario, and former machine-learning researcher I wrote my first lines of code in 1992, in a high school computer science class. When the words “Hello world” appeared in acid green on the tiny screen of a boxy Macintosh computer, I was hooked. I remember thinking with exhilaration, “This thing will do exactly what I tell it to do!” and, only half-ironically, “Finally, someone understands me!” For a kid in the throes of puberty, used to being told what to do by adults of dubious authority, it was freeing to interact with something that hung on my every word – and let me be completely in charge. For a lot of coders, the feeling of empowerment you get from knowing exactly how a thing works – and having complete control over it – is what attracts them to the job. Artificial intelligence (AI) is producing some pretty nifty gadgets, from self-driving cars (in space!) to automated medica

Domains in Which Artificial Intelligence is Rivalling Humans

Prologue: Every decade seems to have its technological buzzwords: we had personal computers in the 1980s; the Internet and World Wide Web in the 1990s; smartphones and social media in the 2000s; and Artificial Intelligence (AI) and Machine Learning in this decade. However, the field of AI is 67 years old, and this is the third in a series of five articles wherein:

The first article discusses the genesis of AI and the first hype cycle, from 1950 to 1982
The second article discusses the resurgence of AI and its achievements during 1983-2010
This article discusses the domains in which AI systems are already rivalling humans
The fourth article discusses the current hype cycle in Artificial Intelligence
The fifth article discusses what 2018-2035 may portend for brains, minds and machines

Domains where AI Systems are Rivaling Humans: As mentioned in a previous article [56], the 1950-82 era saw the new field of Artificial Intelligence (AI) being born, a lot of pioneeri

Reuters Reports that Nvidia to Suspend Self-driving Tests Globally

Nvidia Corp said on Tuesday it will suspend self-driving tests across the globe after the Uber Technologies Inc [UBER.UL] fatality last week, according to a source at the GPU conference in San Jose, California. The chipmaker has been testing self-driving technology globally, including in New Jersey, Santa Clara, Japan and Germany. Last week, Uber suspended North American tests of its self-driving vehicles after one of its self-driving cars killed a woman in Arizona. Source: Reuters

The AI Future for NVIDIA: Interview with CEO Jensen Huang

It’s about making the lives of scientists and researchers easier, Jensen Huang, CEO and co-founder of Nvidia, tells TechCrunch. He’s speaking of the keynote address he intends to give at the company’s upcoming GTC conference. Held in San Jose, miles away from the company’s imposing new headquarters, Nvidia is set to host thousands of attendees from the world’s top artificial intelligence, automotive and gaming companies. To Huang, his address needs to inspire and entertain. That should be easy: he’s naturally inspiring and entertaining. Huang has risen to the elite among Silicon Valley’s visionary leaders. Scores of reports show Nvidia employees love working for him, and his addresses are often technical yet accessible. He commands an audience through his passion for the technology his company is creating. He has been at the helm of Nvidia since co-founding the company at age 30 in 1993, and has led it from a maker of computer graphics cards to the premier platform for art

Supply Chain is ‘Killer’ Blockchain Use Case, says IDC

After more than enough hype, blockchain use cases are lately becoming more numerous and real. In fact, improving supply chain processes will be the “killer” among blockchain use cases, according to Bill Fearnley, IDC research director of worldwide blockchain strategies, who spoke at the IDC Directions 2018 conference held recently in Boston. “I get asked often ‘What are the killer [blockchain] use cases?’ and [supply chain] is it,” Fearnley said. Blockchain ledgers can help improve the supply chain in three ways: shipment track and trace, inventory management and proving a product is genuine. “Increasingly consumers and manufacturers are getting more concerned and more aware of country of origin,” Fearnley said. “For example, you’re trying to make sure that you’re not getting conflict minerals — you want to make sure that the country of origin is the right place to be buying products from, that it’s not in a war zone or from a criminal element.” Other blockchain use cases are fou
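The track-and-trace and provenance properties Fearnley describes come from the append-only, hash-linked structure of a blockchain ledger. The following is a minimal sketch of that idea only, not any production blockchain; the record fields, event names, and helper functions are invented for illustration.

```python
import hashlib
import json

# Illustrative sketch: an append-only, hash-linked ledger for shipment
# track-and-trace. Each block's hash covers its record and the previous
# block's hash, so tampering with an earlier record (e.g. country of
# origin) breaks verification of the whole chain.

def make_block(prev_hash, record):
    payload = json.dumps({"prev": prev_hash, "record": record}, sort_keys=True)
    return {"prev": prev_hash, "record": record,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def verify(chain):
    for i, block in enumerate(chain):
        # Recompute each block's hash and check the back-link.
        expected = make_block(block["prev"], block["record"])["hash"]
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("0" * 64, {"event": "mined", "origin": "site-1"})]
chain.append(make_block(chain[-1]["hash"], {"event": "shipped", "carrier": "acme"}))
print(verify(chain))   # True: untampered chain verifies

chain[0]["record"]["origin"] = "site-2"   # tamper with country of origin
print(verify(chain))   # False: the edit is detectable
```

Real supply-chain blockchains add distributed consensus and digital signatures on top, but the tamper-evidence shown here is the core reason the ledger can prove origin and product authenticity.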

AI Can Help Win the War Against Fake News

It may have been the first bit of fake news in the history of the Internet: in 1984, someone posted on Usenet that the Soviet Union was joining the network. It was a harmless April Fools’ Day prank, a far cry from today’s weaponized disinformation campaigns and unscrupulous fabrications designed to turn a quick profit. Today, misleading and maliciously false online content is so prolific that we humans have little hope of digging ourselves out of the mire. Instead, it looks increasingly likely that the machines will have to save us. One algorithm meant to shine a light in the darkness is AdVerif.ai, which is run by a startup of the same name. The artificially intelligent software is built to detect phony stories, nudity, malware, and a host of other types of problematic content. AdVerif.ai, which launched a beta version in November 2017, currently works with content platforms and advertising networks in the United States and Europe that don’t want to be associated with false or pot
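To give a flavor of the kind of text classification behind fake-news detectors (this is not AdVerif.ai's actual method, which is not public here), the toy sketch below trains a naive Bayes bag-of-words model on a handful of invented example headlines.

```python
import math
from collections import Counter

# Toy sketch of headline classification via naive Bayes over bag-of-words.
# All training headlines are invented; real systems use far richer features.

def train(examples):
    counts = {"fake": Counter(), "real": Counter()}
    totals = Counter()
    for text, label in examples:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    vocab = set(counts["fake"]) | set(counts["real"])
    scores = {}
    for label in counts:
        # Log prior plus Laplace-smoothed log likelihood of each word.
        score = math.log(totals[label] / sum(totals.values()))
        denom = sum(counts[label].values()) + len(vocab)
        for w in text.lower().split():
            score += math.log((counts[label][w] + 1) / denom)
        scores[label] = score
    return max(scores, key=scores.get)

examples = [
    ("miracle cure doctors hate this trick", "fake"),
    ("shocking secret they do not want you to know", "fake"),
    ("city council approves new transit budget", "real"),
    ("researchers publish peer reviewed study", "real"),
]
counts, totals = train(examples)
print(classify("shocking miracle trick", counts, totals))  # fake
```

A classifier this simple is easily fooled, which is exactly why production systems combine many signals (source reputation, image checks, malware scanning) as the article goes on to describe.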

When AI Goes Awry – No Tools, No Clear Methods to Debug AI Software

The race is on to develop intelligent systems that can drive cars, diagnose and treat complex medical conditions, and even train other machines. The problem is that no one is quite sure how to diagnose latent or less-obvious flaws in these systems, or, better yet, how to prevent them from occurring in the first place. While machines can do some things very well, it’s still up to humans to devise programs to train and observe them, and that system is far from perfect. “Debugging is an open area of research,” said Jeff Welser, vice president and lab director at IBM Research Almaden. “We don’t have a good answer yet.” He’s not alone. While artificial intelligence, deep learning and machine learning are being adopted across multiple industries, including semiconductor design and manufacturing, the focus has been on how to use this technology rather than what happens when something goes awry. “Debugging is an open area of research,” said Norman Chang, chief technologist at ANSYS. “That

Initial Forensic Analysis of the Uber Self-Driving Car Incident in Arizona

By Lance Eliot, the AI Trends Insider In my column published on April 27, 2017 (nearly a year ago), I stated this: “I expect that we will soon have a self-driving car crisis-in-faith because some self-driving car will plow into a pedestrian. It is bound to happen.” Sadly, prophetic. You might be aware that in Tempe, Arizona on Sunday, March 18, 2018, in the evening around 10 p.m., an Uber self-driving car with a human back-up operator at the wheel ran into and killed a female pedestrian, 49-year-old Elaine Herzberg, who had entered the street outside of a crosswalk, in front of the oncoming self-driving car. Reportedly, the self-driving car was doing about 40 miles per hour when it hit the victim. Besides the terrible tragedy of having killed the pedestrian, an especially alarming aspect is that reportedly the self-driving car took no evasive action whatsoever, and, furthermore, the human back-up operator also reportedly took no evasive action. This comes as