AI, machine learning to create new kinds of work

The continued advancement of artificial intelligence (AI) inspires both awe and fear: some are amazed by the possibilities inherent in the technology, while others fear its impact on jobs. It is therefore worthwhile to engage with the discussion around AI-related ethics and machine learning.

A recent article captured the mood with a short story. When the first printed books with illustrations started to appear in the 1470s in the German city of Augsburg, wood engravers rose up in protest. Worried about their jobs, they literally stopped the presses. In fact, their skills turned out to be in higher demand than before: Somebody had to illustrate the growing number of books, the report said.

“Once again, however, technology is creating demand for work. To take one example, more and more people are supplying digital services online via what is sometimes dubbed the ‘human cloud’. Counterintuitively, many are doing so in response to AI,” the report added.

Quoting a World Bank report, it said more than five million people have offered to work remotely on online marketplaces such as Freelancer.com and UpWork. Jobs range from designing websites to writing legal briefs, and typically bring in at least a few dollars an hour.

In 2016, such firms earned about US$6 billion (RM25.38 billion) in revenue, according to Staffing Industry Analysts, a market researcher. Those who prefer to work in smaller bites can use “micro-work” sites such as Mechanical Turk, a service operated by Amazon.com Inc. About 500,000 “turkers” perform tasks such as transcribing bits of audio, often earning no more than a few cents for each “human-intelligence task”.

Many big tech companies employ, mostly through outsourcing firms, thousands of people who police the firms’ own services and control quality. Google Inc is said to have an army of 10,000 “raters” who, among other things, look at YouTube videos or test new services. Microsoft Corp operates something called a “Universal Human Relevance System”, which handles millions of micro tasks each month, such as checking the results of its search algorithms, it added.

And it noted that these numbers are likely to rise, more so with the increasing demand for “content moderation”.

“AI will eliminate some forms of this digital labour — software, for instance, has got better at transcribing audio. Yet, AI will also create demand for other types of digital work. The technology may use a lot of computing power and fancy mathematics, but it also relies on data distilled by humans.

“For autonomous cars to recognise road signs and pedestrians, algorithms must be trained by feeding them lots of video showing both. That footage needs to be manually ‘tagged’, meaning that road signs and pedestrians have to be marked as such. This labelling already keeps thousands busy. Once an algorithm is put to work, humans must check whether it does a good job and give feedback to improve it,” the report added.
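To make the workflow the report describes more concrete, the following minimal Python sketch traces the same loop: humans tag examples, a model is trained on those tags, and humans then review the model's output to produce corrections. It is an illustration only; the feature values and labels are invented, and the use of the scikit-learn library is an assumption, not something drawn from the report.

    # Hypothetical sketch of the tag -> train -> review loop described above.
    # Features and labels are made up; a real system would extract them from video frames.
    from sklearn.linear_model import LogisticRegression

    # Step 1: humans manually "tag" examples (1 = pedestrian, 0 = road sign).
    human_tagged = [
        ([0.9, 0.1, 0.3], 1),  # frame features, human label
        ([0.8, 0.2, 0.4], 1),
        ([0.1, 0.9, 0.7], 0),
        ([0.2, 0.8, 0.6], 0),
    ]
    features = [f for f, _ in human_tagged]
    labels = [y for _, y in human_tagged]

    # Step 2: train an algorithm on the human-labelled data.
    model = LogisticRegression().fit(features, labels)

    # Step 3: humans check the model's output and feed corrections back in.
    new_frames = [[0.85, 0.15, 0.35], [0.15, 0.85, 0.65]]
    for frame, pred in zip(new_frames, model.predict(new_frames)):
        print(frame, "->", "pedestrian" if pred == 1 else "road sign")
        # A human reviewer would confirm or correct each label; corrections
        # are added to human_tagged and the model is retrained.

The point of the sketch is the division of labour the report highlights: the algorithm does the predicting, but people supply the labels at the start and the quality checks at the end.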

Ethical Issues

With the advances achieved by AI, people will naturally start talking about the ethical issues that crop up along the way.

The larger corporations are taking the challenge head-on. DeepMind Technologies Ltd, Google’s London-based AI research sibling, for example, has opened a new unit focusing on the ethical and societal questions raised by AI.

In a recent report, The Guardian noted that the new research unit will aim “to help technologists put ethics into practice, and to help society anticipate and direct the impact of AI so that it works for the benefit of all”.

Last year, DeepMind hit headlines for building the first machine to beat a world champion at the ancient Asian board game Go. It is now bringing in external advisors from academia and the charitable sector, including Columbia development professor Jeffrey Sachs, Oxford AI professor Nick Bostrom and climate change campaigner Christiana Figueres to advise the unit.

“These fellows are important not only for the expertise that they bring but for the diversity of thought they represent,” the report quoted the unit’s co-leads, Verity Harding and Sean Legassick, as saying in a blogpost announcing its creation.

The unit, called “DeepMind Ethics and Society”, is not the AI Ethics Board that DeepMind was promised when it agreed to be acquired by Google in 2014. That board, which was convened by January 2016, was supposed to oversee all of the company’s AI research, but nothing has been heard of it in the 3½ years since the acquisition. It remains a mystery who is on it, what they discuss, or even whether it has officially met.

DeepMind Ethics and Society is also not the same as DeepMind Health’s Independent Review Panel, a third body set up by the company to provide ethical oversight — in this case, of its specific operations in healthcare, the report added.

“Nonetheless, its creation is the hallmark of a change in attitude from DeepMind over the past year, which has seen the company reassessing its previously closed and secretive outlook. It is still battling a wave of bad publicity that started when it partnered with the Royal Free in secret, bringing the app Streams into active use in the London hospital without being open with the public about what data was being shared and how,” the report noted.

Machine Learning

The other related area is machine learning, a term constantly bandied about alongside AI.

Imperial College London, which sees huge potential in machine learning, is making some headway here. This month, Dr Marc Deisenroth from the department of computing at Imperial College London and colleagues from across the college launched what they called the Machine Learning Initiative.

On the question of what machine learning is, an article on the college’s website noted that it refers to algorithms and methodologies that give computers the ability to automatically learn and improve from experience, without human intervention and without being explicitly programmed.

Machine learning automatically finds patterns and structures in data that humans cannot easily process, and uses them to make predictions and decisions. The key emphasis is on “automatic”. No specific human guidance or expert knowledge is required. Machine learning algorithms can automatically adapt to evidence from data, which allows them to learn new concepts.
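A minimal sketch can illustrate that idea of finding structure without guidance. The example below, which assumes Python, the scikit-learn library and invented data points (none of which are mentioned by the article), lets an algorithm discover groups in unlabelled data on its own.

    # Hypothetical sketch: an algorithm finds structure in data without being
    # told what the groups are. The numbers below are made up for illustration.
    from sklearn.cluster import KMeans

    # Two-dimensional data points with no labels attached.
    points = [[1.0, 1.1], [0.9, 1.0], [1.1, 0.9],
              [8.0, 8.2], [7.9, 8.1], [8.1, 7.8]]

    # The algorithm adapts to the evidence in the data and discovers two groups
    # on its own; no human has specified what the groups mean.
    model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
    print(model.labels_)           # e.g. [0 0 0 1 1 1] -- the discovered pattern
    print(model.cluster_centers_)  # the structure the algorithm has learned

No expert tells the program where the boundary between the two groups lies; it adapts to the data it is given, which is the “automatic” learning the article emphasises.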

So, is machine learning a form of AI? On this point, the article said that machine learning can be considered the engine of modern AI.

“It provides the underlying technology that drives AI. AI is about complex systems that behave intelligently. In order to reach this goal, AI poses many questions, and machine learning provides the technologies toward answering these questions. In other words, AI is about systems and questions whereas machine learning is about practical solutions to these challenges. Another difference is that AI strives for intelligence, whereas machine learning does not necessarily do this,” it said.