The Future of Work

The future of work and the consequences of Industrie 4.0, combined with AI (Artificial Intelligence), are much-discussed subjects nowadays, so TextileFuture would like to present the latest findings on these subjects in this Newsletter. They are based upon university research in the UK, as well as in the USA. In the second, complementary feature we present “Digital trends and observations from WEF Davos 2018”.

A third of UK jobs could be automated by 2030. Artificial intelligence, machine learning and automation are already revolutionising the way we work, and their impact will continue to increase at an exponential rate.

The Future of Work special report, published in The London Times, features insights into the role of company culture in ensuring a prosperous future, the rise of the side-hustle, and game-changing advances in quantum networking that could see us communicating at the speed of thought. In addition, you’ll find an infographic showing the views of millennials on their working lives, the controversial issue of regulating robots and comment on whether UK workers will be able to afford so-called enforced leisure.

It wasn’t supposed to be this way. In 1930, the economist John Maynard Keynes predicted we’d all be enjoying lives of unparalleled leisure by now, occasionally popping into the office between rounds of golf and sunny mornings on the allotment.

The premise of Keynes’ prediction was the speed of mechanisation. Fewer hands on the factory floor would mean more free time for workers. Yet the modern era simply sees us cramming more work into the same nine-to-five day as always.

But could that be about to change? Such is the promise of the latest generation of automated technologies and self-learning machines. According to a report by the independent UK research group the Autonomy Institute, close to a third of UK jobs could be automated by the 2030s, leading some to talk seriously of a post-work society.

The prospect of a better work-life balance certainly fits with the spirit of the times. Research by Timewise, a consultancy and recruitment firm, shows nearly nine in ten full-time employees say they either work flexibly already or that they would like to.


More time for leisure or study is at the heart of this desire for flexible working among almost a third of UK workers, says Daniela Marchesi, the firm’s campaign director.

“The demand for flexible working is huge,” she observes. “Our research busts the ‘mum myth’ too, showing that the desire [for a better work-life balance] is equally as strong in men, and that generation Y – those between 18 and 25 – are leading the charge.”

Extra leisure time isn’t just a potential boon for the overworked. It’s also a chance to make today’s workplace more equitable, with the underemployed and unemployed gaining a fairer slice of the working hours on offer.

“By reducing the working week, we could see a fairer distribution of labour across society so that work is not thought of as being overbearing or, at the other end of the scale, a rare and precarious commodity,” argues Kyle Lewis, the Autonomy Institute’s spokesperson.

British workers shouldn’t give up on the dream of a shorter working week just yet, though. The impact of self-learning machines, artificial intelligence and similarly incipient technologies on working patterns is only just beginning to be felt. Be patient, says Geraint Johnes, professor of economics at Lancaster University and research director at The Work Foundation.

To date, the rise of the robots has been felt mainly in manufacturing industries. The service economy, in contrast, which employs four in every five British workers, is expected to be far less impacted.

But the service economy will eventually feel the impact too, Professor Johnes insists, come the robot-inspired leisure revolution. The key question for him is how evenly spread that impact will be. He cites jobs such as lorry driving, which could be decimated by autonomous transport, pushing truckers into what he euphemistically refers to as “enforced leisure”.

“If we want to take advantage of the opportunities that machines give us to have more leisure then the ideal would be to have a fairly even distribution of the benefits,” he argues.

Such a fair distribution will almost certainly require government intervention of some kind. If people are to work fewer hours per week, then their incomes will drop. For those on low incomes to be able to afford more leisure time, either their wages need to rise or state welfare needs to increase.

But challenges can be found at the higher end of the income spectrum as well. There needs to be a cultural shift in how we think about work and the status we afford it, says Anna Coote, head of social policy at the New Economics Foundation. She singles out for particular attention the cult of hard work and long hours, buttressed by the pervasive notion that “we are what we do”.

As she says: “It’s not that hard work isn’t good. Lots of people enjoy working hard. But work isn’t the only thing in life. We need to reclaim all the things we do when we’re not doing paid work, like friendships and caring for others.”

Note, she doesn’t say, “like jetting around the world”. In modern times, leisure has increasingly morphed into an act of consumption. Once, all a rambler needed was an old pair of boots and a stretch of nearby countryside; rebranded as hikers, they are now not equipped without a full Gore-Tex wardrobe and regular trips to far-away trails.

“Some hobbies can turn into a really very expensive and energy-intensive way of living,” Ms Coote says. “This may be affordable for those with the extra time, but it’s not sustainable if we’re to have the kind of planet we want for our grandchildren.”

Will Stronge echoes the need to rethink ideas of leisure, as well as work. The term leisure is often misconstrued in modern society, interpreted as a synonym for being idle, says Mr Stronge, also of the Autonomy Institute.

The truth is far from it, however. Many people have very precise ideas about how they would productively invest any extra time their jobs might allow, often showing a willingness to invest the kind of effort and determination they demonstrate at work, if not more.

“In a society where people could sustainably reduce their working hours, we would start to see individuals developing in fascinating, unforeseen ways, making use of their new free time as they see fit,” Mr Stronge says.

Automated technologies have huge time-saving potential, but a life of more leisure also depends on a host of cultural, political and employment factors. The ideal situation would be for people to elect for themselves how hard they wish to work. The ability to choose, after all, remains one of the defining lines between man and machine, however smart the latter may become.

We have to keep the bots under control

As artificially intelligent software robots, or bots, become faster learners and better at mimicking human behaviour, an augmented workplace is inevitable and poses ethical and policing challenges

At what point does a factory worker lose capacity because of a co-bot? What if a logistics artificial intelligence (AI) system is fooled by new data and makes a fatal error? What if board executives are duped by hostile chatbots and act on misinformation?

None of these bot-gone-bad scenarios is science fiction fantasy. Don’t forget the Tesla driverless car that mistook a trailer for the sky, the racist Microsoft bot that learnt from bad examples and the chatbots that influenced the 2016 US presidential election. With 45 per cent of jobs forecast to be AI-augmented by 2025, according to Oxford University research, it is alarming that policing the robots remains an afterthought.

https://www.oxfordmartin.ox.ac.uk/publications/view/2279

Alan Winfield, the only professor of robot ethics in the world, identifies the problem of technologies being introduced rapidly and incrementally, with ethics playing catch-up. “We need to build and engineer AI systems to be safe, reliable and ethical – up to now that has not happened,” he says. Professor Winfield is optimistic about workforce augmentation and proposes a black box with investigatory powers in the event of an AI catastrophe.

But not all AI is easy to police, with some varieties more traceable than others, says Nils Lenke, board member of the German Research Institute for Artificial Intelligence, the world’s largest AI centre. Unlike traditional rules-based AI, which provides “intelligent” answers based on an ability to crunch mathematical formulae, learning algorithms based on neural networks can be opaque and impossible to reverse engineer, he explains.

Neural networks are self-organising in the quest to find patterns in unstructured data. The drawback, says Dr Lenke, is that it is impossible to say which neuron fired off another in a system composed of hundreds of thousands of connections learning from thousands of examples. “When an error occurs, it’s hard to trace it back to the designer, the owner or even the trainer of the system, who may have fed it erroneous examples,” he says.
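To make this traceability gap concrete, here is a minimal, purely illustrative Python sketch (our own construction, not code from Dr Lenke or his institute; the credit-decision setting and every name in it are hypothetical). The rules-based function can justify each answer by pointing to the rule that fired, while even a toy neural network's verdict emerges from all of its weights at once:

```python
# Illustrative sketch only: contrasting a traceable rules-based decision
# with an opaque neural one. All names and numbers are hypothetical.
import random

# Rules-based AI: every answer can be traced to the explicit rule that fired.
def rules_based_credit(income, missed_payments):
    if missed_payments > 2:
        return "reject"    # traceable: the missed-payments rule fired
    if income < 20000:
        return "review"    # traceable: the low-income rule fired
    return "approve"       # traceable: the default rule applied

# A toy two-layer network with random weights standing in for a trained
# model. Its decision is a function of all 16 parameters at once; no
# single "neuron" can be blamed for the outcome.
random.seed(0)
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
W2 = [random.uniform(-1, 1) for _ in range(4)]

def neural_credit(features):
    hidden = [max(0.0, sum(w * x for w, x in zip(row, features)))
              for row in W1]                        # ReLU hidden layer
    score = sum(w * h for w, h in zip(W2, hidden))  # weighted sum of neurons
    return "approve" if score > 0 else "reject"     # no human-readable "why"

print(rules_based_credit(25000, 3))    # reject, and we know exactly why
print(neural_credit([0.4, 0.1, 0.9]))  # a verdict with no traceable reason
```

Scale those 16 parameters up to the hundreds of thousands of connections Dr Lenke describes, learned from thousands of examples, and it becomes clear why an error is so hard to trace back to the designer, the owner or the trainer.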

Governments are beginning to tackle the complexities of policing AI and to address issues of traceability. The European Union General Data Protection Regulation, which comes into force in May 2018, will mandate that companies are able to explain how they reach algorithm-based decisions. Earlier this year, the EU voted to legislate around non-traceable AI, including a proposal for an insurance system to cover shared liability by all parties.

More work is needed to create AI accountability, however, says Bertrand Liard, partner at global law firm White & Case, who predicts proving liability will get more difficult as technology advances faster than the law. “With [Google’s] DeepMind now creating an AI capable of imagination, businesses will soon face the challenge of whether AI can own or infringe intellectual property rights,” says Mr Liard.

In the meantime, an existing ethical gap that needs fixing now is the lack of regulation requiring companies to declare their use of bots or AI. If a chatbot gives a reasonable response online, there’s a natural assumption that we are communicating with a fellow human being. “Without an explicit warning, as recipients we have no opportunity to evaluate them and can become overwhelmed,” says Dr Lenke, who is also senior director of corporate research at Nuance Communications.

His concerns chime with the findings of a 2017 report by digital agency SYZYGY, “Sex, Lies and AI”, which found high levels of anxiety about undeclared conversational or video user interfaces. More than 85 per cent of respondents wanted AI to be regulated by a “Blade Runner rule”, making it illegal for chatbots and virtual assistants to conceal their identity. A cause for even greater concern, however, might be chatbots fronting an AI application capable of interpreting emotions.

https://think.syzygy.net/ai-report/uk


Nathan Shedroff, an academic at the California College of the Arts, warns the conversational user interface can be used to harvest such “affective data”, mining facial expressions or voice intonation for emotional insight. “There are research groups in the US that claim to be able to diagnose mental illness by analysing 45 seconds of video. Who owns that data and what becomes of it has entered the realm of science fiction,” he says.

As executive director of Seed Vault, a not-for-profit fledgling platform launched to authenticate bots and build trust in AI, Professor Shedroff thinks transparency is a starting point. “Science fiction has for millennia anticipated the conversational bot, but what it didn’t foresee were surrounding issues of trust, advertising and privacy,” he says. “We are on the cusp of an era where everything is a bot conversation with a technical service behind it.”

Affective data harvested from employees could be used for nefarious and undercover purposes, says Professor Shedroff, who lists examples of employees inadvertently sharing affective data that collectively creates invaluable insider information, and of third-party suppliers collecting data they share or sell. GDPR (General Data Protection Regulation) does not cover affective data, and companies are neither aware of nor dealing with the threat. “We’re in new territory,” says Professor Shedroff.

While legislators and regulators crank up, businesses such as Wealth Wizards are not putting competitive advantage on hold. The online pension advice provider uses AI and plans to use chatbots, but complies with Financial Conduct Authority rules, says chief technology officer Peet Denny. “Basically, anything that is required of a human we apply to our AI tools. It’s not designed for AI, but it’s a start,” he says.


In the absence of AI regulation and laws, a plausible approach advocated by UK innovation charity Nesta is to hire employees to police the bots. In recent months tech giant Facebook has started to do exactly that, recruiting thousands of staff. “If social media giants deem it necessary to police their algorithms, it matters even more for high-stakes algorithms such as driverless cars or medicine,” says Nesta’s chief executive Geoff Mulgan.

An industry that has used robots for decades and is now embracing co-bots is perhaps the best role model of how we should treat autonomous systems. Manufacturers are installing more intelligent robots on the factory floor and Ian Joesbury, director at Vendigital, anticipates a mixed workforce in the future. “Skilled technicians will work alongside a co-bot that does the heavy lifting and quality assurance,” he says.

Professor Alan Winfield, robot ethicist at the University of the West of England, has a background in safety-critical systems and believes AI has much to learn from that sector.

In particular, he advocates the black box approach used by the aviation industry for investigating plane crashes. “It benefits everyone, including the AI industry, if services are tested and comply with a standard, and there is a mandated investigative procedure,” he says.

At the moment, driverless car companies do log the data they collect, primarily to improve their assistance systems, notes Professor Winfield. But this logging is not mandated by governments in the way that flight data recording is legally required. In an air accident, the operator or manufacturer of the aircraft is legally obliged to hand over the contents of the black box. “None of those things apply right now in driverless cars or other robots,” says Professor Winfield.

Ultimately, in the event of a disaster, humans are responsible agents and cannot hide behind algorithms, he says. “If someone is killed, you can’t stand up in a court of law and say ‘it is the algorithm’. If algorithms have consequences that cause harm, it is the humans who are responsible,” says Professor Winfield. “All AI should have the robot equivalent of a flight data recorder.”
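As a rough illustration of what such a recorder could look like in software, here is a minimal sketch (our own, under stated assumptions; it is not Professor Winfield's design, and the file format and field names are invented). The idea is an append-only log that timestamps every input the system perceived and every decision it took, so investigators can replay events after an accident:

```python
# A minimal "ethical black box" sketch (hypothetical design, not
# Professor Winfield's): an append-only, timestamped record of what an
# autonomous system perceived and decided, for post-accident analysis.
import json
import time

class EthicalBlackBox:
    def __init__(self, path="blackbox.log"):  # file name is illustrative
        self.path = path

    def record(self, sensor_inputs, decision):
        entry = {
            "timestamp": time.time(),  # when the decision was taken
            "inputs": sensor_inputs,   # what the system perceived
            "decision": decision,      # what it chose to do
        }
        # Append-only: one JSON object per line, never rewritten.
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")

# Usage by a hypothetical driverless-car controller:
box = EthicalBlackBox()
box.record({"lidar_m": 42.0, "speed_kmh": 58}, "maintain_speed")
box.record({"lidar_m": 11.5, "speed_kmh": 58}, "emergency_brake")
```

The append-only discipline mirrors aviation practice: the record is written as events happen and is never rewritten by the system it monitors.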

Nesta, the UK innovation charity, also believes an immature AI industry can learn from high-risk sectors that are effectively regulated. Two years ago, it proposed a new regulatory institution, a Machine Intelligence Commission. “The field of human fertilisation has to work within an ethical framework to gain public support and has done a good job,” says Nesta’s chief executive Geoff Mulgan.

The British Standards Institution is also playing its part in ensuring safe AI and last year published BS 8611, the only ethical standard to date for AI design. Although voluntary, a standard makes a valuable contribution, says BSI head of market development Dan Palmer. “Unlike regulation, a standard is a living document that can be amended to respond to concerns or catastrophe,” he says. “BS 8611 has attracted interest around the world.”

Manufacturers are reviewing human resources practices in a new situation where a human works alongside a robot that never tires, Mr Joesbury adds. But the sector’s policing of current-generation robots provides a graphic warning of how we should respect future AI. “Robots can be unpredictable in the way they respond to instructions,” he says. “You often see them caged on factory floors so they can’t hurt the human workers.”

https://www.thetimes.co.uk/

https://www.raconteur.net


Digital trends and observations from WEF Davos 2018  

The feature is presented by Nicolaus Henke and Paul Willmott from McKinsey, both Senior Partners in McKinsey’s London office.

Here are the trends and observations they made:

The massive snowfall in Davos this year certainly made getting around a little more challenging compared to years past, but that did nothing to dampen the conversation. We were fortunate to be at this year’s World Economic Forum, and after dozens of conversations with executives from around the world, we wanted to share a number of things that struck us about what we heard.

AI is growing up: Augmenting humans and social good

AI is top of mind for many executives, but the application of AI—and, more broadly, advanced analytics—is generating more thoughtful and nuanced conversations. While there are serious concerns about the social implications of AI, the reality is that it’s hard to see how machines can really be effective on their own, just as it’s hard to see how humans can work as well without machines. The most thoughtful organizations are looking to understand how AI can most effectively augment humans.

That idea of augmentation is playing through in other areas too. If you have good AI, you need processes to ensure the insights it generates are used. This is harder than it sounds. You can’t simply have a machine spitting out advice because people just won’t read it. By the same token, it doesn’t help to automate poor decisions. It’s all about finding ways to get the various technologies focused on what they do best, and then working together with humans to drive better results.

It was also inspiring to see how much focus there is on harnessing AI for social good. There is a significant opportunity for AI to help with big problems, from predicting the absence of rain in a region to managing mass immigration flows. While businesses are moving ahead quickly with AI, NGOs and regulators are far behind when it comes to the talent and capabilities needed. That may be changing, however. Cutting-edge technical universities are increasingly offering courses on AI and social good, where there is strong interest from top students.

Gaining traction: Distributed ledgers (e.g., blockchain) and ecosystems

There is also a massive debate emerging around distributed ledger technology (more commonly referred to as blockchain, though that’s actually just one example of distributed ledger technology), specifically around its applications to businesses. There’s still lots of hype—often shaped by a lack of true understanding of what the technology is—but also some real substance beyond its use for the cryptocurrencies that have been in the headlines. The promise of distributed ledgers lies in their ability to reliably, securely, and transparently access and share targeted sets of data.

Let’s take the example of sepsis, a dangerous but largely preventable condition. Technology can help prevent sepsis by linking signals the body generates to historical health data. The analysis of this combined data could then flag danger signs before other symptoms arise and drive timely medical interventions. Distributed ledger technology could enable that kind of merging of data and analytics in a way that’s very hard to do today. Another example is banks that want to lend in emerging markets, where there is often no credit risk data but widespread mobile phone usage. Through distributed ledgers, banks could access telco data to see potential customers’ phone bill payment records as a quick and reliable measure of loan suitability.
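As a rough sketch of the mechanism that makes such sharing trustworthy, here is a minimal, hypothetical Python example (illustrative only; real distributed ledgers add replication across independent parties and a consensus protocol, both omitted here). Each record embeds a hash of its predecessor, so a bank reading telco payment records can verify that no past entry has been quietly altered:

```python
# Minimal hash-chained ledger sketch (illustrative; real distributed
# ledgers add replication and consensus). Tampering with any past entry
# breaks every later link in the chain.
import hashlib
import json

def entry_hash(entry):
    # Deterministic hash of an entry's canonical JSON form.
    return hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()

class Ledger:
    def __init__(self):
        self.entries = []

    def append(self, data):
        prev = entry_hash(self.entries[-1]) if self.entries else "genesis"
        self.entries.append({"prev_hash": prev, "data": data})

    def verify(self):
        # Recompute the chain; any altered entry invalidates its successor.
        return all(
            self.entries[i]["prev_hash"] == entry_hash(self.entries[i - 1])
            for i in range(1, len(self.entries)))

# Hypothetical use: telco payment records a lender could rely on.
ledger = Ledger()
ledger.append({"customer": "A", "phone_bill_paid": True})
ledger.append({"customer": "A", "phone_bill_paid": True})
print(ledger.verify())                                  # True
ledger.entries[0]["data"]["phone_bill_paid"] = False    # tamper with history
print(ledger.verify())                                  # False
```

The tamper-evidence comes entirely from the hash chain; distributing copies of that chain across independent parties is what removes the need to trust any single record-keeper.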

Distributed ledgers are also important for unlocking the cumulative power of ecosystems, which are increasingly a focus for businesses. It’s becoming clear to even the largest and most successful companies that they can’t do everything on their own. They are now concentrating much more on engaging in ecosystems of businesses, platforms, vendors, agencies, and the like through formal and informal partnerships, synergistic agreements, alliances, and other arrangements. However, ecosystems don’t yet happen at scale because of the difficulty of getting different data systems to speak to each other with current technology. Distributed ledgers are the key ingredient to enable that level of communication and analysis.

Businesses are starting to put pilot teams together to understand how distributed ledgers work, and what the implications are for their businesses. We’re on the verge of some very interesting business models emerging from this.

Who’s got talent?

Almost everyone we spoke with mentioned how important the talent question has become. Of course, talent is always an issue, but it’s now a CEO topic. There were three flavors of the talent challenge that we noticed:

“I need to get my hands on some quality data scientists.” There is a limited number of these kinds of people, so the competition is intense (and expensive).

“I need to train my senior people and managers to understand how to work with and lead these data scientists.”

“I need to do something about the percolating social implications.” Many leaders are concerned about the implications that the displacement of jobs by automation will have on society. Added to that is the fact that much of the employment growth in Western countries is in the gig economy. Leaders are looking at re-skilling as a cheaper and more effective approach than paying to hire and train new people. But that then requires building the capacity to develop, administer, and adapt a continuous training function, because the reality is that many employees will need to be constantly learning and adapting. That includes thinking through the skills needed in three to five years, and beginning to develop them now before it’s too late.

Bold moves and what they mean for the organization

Many business leaders are thinking much more boldly about the changes they should make. One executive at an oil services business realized that they needed excellent advanced analytics capability to help manage their pipelines (such as for maintenance). His approach was to hire the best entrepreneur he could find and set up a self-standing business to specifically build out this capability. Not only did this executive believe it was the best way to build up an important capability quickly, it was also a talent play.

These bold moves are inextricably tied to organizational issues. Building out new businesses or figuring out how (or whether) to move to full-scale agile ways of working throughout the business raises all sorts of thorny questions: What does the governance look like? How do you make investment decisions? These are exactly the kinds of questions that reflect a deeper commitment to transformations at the core of the business.

The tough talk: Cybersecurity and looming “Techlash”

Overall, the feeling was very positive: the business outlook is good and the economy is flying. But below the surface there were very real and potentially damaging concerns. Cybersecurity is foremost among them, with companies locked in an arms race to stay ahead of (or even catch up to) highly sophisticated cyber criminals. It’s a big issue for CEOs and boards, and some of the business world’s best minds are trying to understand how to get the upper hand.

One other undercurrent of concern was around the idea of a “techlash”, or backlash against tech companies driven by fears that they are becoming too large and monopolistic. At one level is the basic concern that tech companies are simply outcompeting incumbents, but beyond that there’s a sense that large tech companies are dictating terms to the marketplace, not taking privacy concerns seriously enough, and paying too little attention to the social implications of technology. Yes, to some degree this is driven by jealousy at the success these new tech businesses have enjoyed and the natural discomfort that comes with disruption. But there is also real concern about what’s happening to our society with these changes, and a sense that not all of it is good.

Despite the complexity of some of these issues and concerns, we were encouraged to see the discussion about them. Dialog is an indication of innovation to come.

www.mckinsey.com