The European Patent System at a Crossroads

This is the fourth piece in a week-long series on intellectual property. You can read the previous pieces here, here, and here.

The forthcoming Unitary Patent is high on the political agenda of European Union technocrats and policymakers. Half a century after the creation of the European Patent Convention (EPC), some major stakeholders claim to see the light at the end of the tunnel. Should we then expect an impact on innovation performance in Europe? The answer is not straightforward, and positive impact will only be felt if policymakers pursue significant efforts to put in place an effective and truly “European” system.

Long Lasting Issues with the Current System

There are currently two layers of patent protection in Europe. The first consists of national patents, granted by national patent offices. The second, created through the EPC, consists of a patent granted centrally by the European Patent Office (EPO). Once granted, a European patent must be translated, validated and renewed in each European country targeted for protection. On average, a patent granted by the EPO is validated in about six countries, because translation costs, validation fees and annual maintenance fees must be borne by the patent holder in each market where protection is needed. The consequences of this fragmented system are analyzed at length here. The most important is the prohibitive cost of maintaining a patent in force in several jurisdictions (Fig. 1). The prevalence of national jurisdictions, which are highly heterogeneous in their costs and practices, also induces a high level of uncertainty and considerable managerial complexity. These high costs and this uncertainty—which peaks in cases of parallel litigation with different outcomes across countries—hamper the effectiveness of the European patent system in its mission to stimulate innovation. It is a system that implicitly favors big players, who have the resources to cope with it.

Figure 1. Cost Consequences of the Current System: Applicants Pay 5 to 10 Times More Than in Any Other Country for Ten Years of Protection

This two-layer system has been running for about 50 years, with few stakeholders actually willing to head toward a “unitary” patent that would be validated only once for (most of) the European Union market in a centralized and affordable system. Resistance to change is natural, especially when many stakeholders fear losing resources (attorneys, translators, and national patent offices have legitimate worries). Yet significant resources would be spared by the business sector, which could then be allocated to research and development projects. In-depth simulations show that, thanks to its attractiveness in terms of market size and a sound renewal fee structure, the Unitary Patent would drastically reduce the relative patenting costs for applicants (Fig. 2) while generating more income for the European Patent Office and most national patent offices.


Figure 2. Thanks to the Unitary Patent, the Relative Cost of a Patent in Europe Will Become Comparable with Those of Japan and the U.S.

Where Do We Stand?

Ratification by at least 13 member states (including Germany, France and the UK) is needed for the unitary patent package—which includes the Unitary Patent and the Unified Patent Court Agreement—to enter into force. The latter sets up a court to address disputes over classical European patents and unitary patents, with the seats of its central division located in Germany, France and the UK.

As of late April 2018, 15 member states (out of 28) have ratified the agreement: Austria, France, Belgium, Luxembourg, Sweden, Denmark, Malta, Finland, the Netherlands, Portugal, Bulgaria, Italy, Estonia, Latvia and Lithuania. Germany and the UK are still missing from the count. The UK has in fact ratified the package, but its ratification has not yet been formally communicated. Germany's ratification is ready but has been delayed by a constitutional complaint against the UPC legislation pending before the German Federal Constitutional Court. Judges still have to be hired. In a nutshell, the Unitary Patent could enter into force with 15 to 18 countries around the end of 2018 at best, but more probably in late 2019—and even that may be optimistic.

Unfortunately, several European countries do not seem ready to join the club. In particular, the Czech Republic, Hungary, Greece, Poland, and Spain seem willing to resist for a long time. Some of them rely on language arguments, others on the small size of their innovation sectors, preferring instead to facilitate imitation. A third argument relates to national patent offices' fear of losing precious resources. The language argument is actually weak: the current PCT system (an option to file patents in many countries at the global level) is already heavily used, with the vast majority of applications filed in English—so small technology-based entrepreneurs must already understand English to investigate their freedom to operate, given the hundreds of thousands of pending PCT applications.

What Do EU Stakeholders Expect?

The Unitary Patent package undoubtedly represents a major step toward the creation of a truly European patent system. Politically and symbolically, it will constitute an important stepping stone in the construction of Europe. That being said, one should not expect too much of an impact on real innovation efforts (a patent system is supposed to stimulate innovation), for two main reasons. First, the Unitary Patent will include only about 18 EU member states, with some important economies, such as Spain and Poland, not being part of it. Second, and more important, the Unitary Patent constitutes a third layer of patent protection on top of the two layers presented above (European patents and national patents). By building a unitary system on top of the current ones, we are actually making the patent system in Europe more complex and uncertain than in the past. Yet the move must be made and the effort sustained.

To build a truly European system that supports the continent's innovation process, bold moves are still needed. First, the current European and national patents should be progressively phased out, so that the system evolves toward a single layer, as in China, the U.S., Japan or Brazil. Second, the European Patent Office should be less independent than it currently is. In all important economies, patent offices are part of a political agenda, serving an industrial policy, with a coherent approach to innovation and entrepreneurship. In Europe, by contrast, we barely see a political leader—whether from a national government or the European Commission—who claims to rely on the patent system to leverage industrial policy, or who monitors it for the sake of European consumers, universities and entrepreneurs: those who would warmly welcome a single-layer patent system in Europe.

How Much American IP Is China ‘Stealing?’

This is the third piece in a week-long series on intellectual property. You can read the previous pieces here and here.

The U.S. has been justifying its intention to impose tariffs on more than $150 billion in Chinese products by claiming that they are a response to China unfairly gaining access to American trade secrets.

U.S. officials say China aims to cheat its way to dominance in high-value technological fields such as renewable energy, telecommunications and artificial intelligence.

The blue-ribbon Commission on the Theft of American Intellectual Property estimates the total cost to the American economy resulting from IP theft as between $225 billion and $600 billion annually. The commission labels China the “principal IP infringer.” However, only a portion of the IP value lost to China is stolen outright, notes Paul Goldstein, an IP expert at Stanford. Instead, it’s given away in negotiations with Chinese businesses, officials and investors.

The Danger of Joint Ventures

In some industries, China has long required foreign companies seeking access to the domestic market to enter into joint ventures with domestic firms. U.S. officials and some businesses say Chinese officials flout trade rules limiting the amount of technology that the Chinese partner can receive in a joint venture.

Nearly one in five American businesses in China (19 percent) say they have been directly asked to transfer technology to a Chinese partner, according to a 2017 survey by the U.S.-China Business Council. Sixty percent of those firms said they made the transfer only reluctantly.

Exhibit 1: How Did Your Company Respond to a Request for a Tech Transfer?

Source: U.S.-China Business Council 2017 Member Survey

Licensing Requirements

China also uses onerous administrative review and licensing processes “to force the disclosure of sensitive technical information,” according to the Office of the U.S. Trade Representative. Foreign companies must submit to reviews and licenses to expand or establish operations or to offer products in the Chinese market. Chinese officials use the occasion to directly or indirectly ask for technical information.

There’s some debate as to how much sensitive technology is being disclosed this way, but many foreign companies see reviews and licenses as one of the most significant barriers to doing business in China.

Exhibit 2: U.S. Companies Experiencing Licensing Challenges in China

Source: U.S.-China Business Council 2017 Member Survey

Tech Acquisitions

U.S. trade officials contend that the Chinese government is directing private companies to buy American technology companies in line with the government's industrial policy. The Office of the United States Trade Representative states that the Chinese government directs Chinese companies to acquire and invest in U.S. companies to obtain cutting-edge technologies and intellectual property.

Chinese foreign investment in the U.S. has grown sharply over the past decade.

Exhibit 3: Chinese Investment in the U.S.

Source: Office of the U.S. Trade Representative

Increasingly, this investment has taken the form of acquisitions rather than “greenfield” investments, trade officials say. And it’s grown particularly quickly in the high-tech and innovation-heavy sectors targeted by Chinese industrial policies.

Exhibit 4: Chinese Investment in High-Tech U.S. Sectors

Source: Office of the U.S. Trade Representative

In many cases, Chinese buyers contend that they have no ties to the government. Regardless, the U.S. administration is increasingly seeking to block Chinese acquisitions by arguing that economic leadership in advanced technologies is tied to the national interest.

Theft by Hacking

For over a decade, the Chinese government has “conducted and supported” cyber theft of American trade secrets to gain competitive advantage, according to the Office of the U.S. Trade Representative.

Quantifying the extent and source of cyberattacks is difficult, for obvious reasons. But the Center for Strategic and International Studies, working with McAfee, estimates the total cost of cybercrime in North America is between $140 billion and $175 billion, or up to 0.87 percent of GDP.

Recently the Bureau of Industry and Security asked U.S. businesses about the impact of malicious cyber activity from all sources, not just China.

Exhibit 5: Impact of Malicious Cyber Activity on Companies

Source: Office of the U.S. Trade Representative

Assuming that China is gaining unfair access to foreign IP, are tariffs the answer?

Perhaps not, says Mr. Goldstein, the IP expert. He suggests pursuing relief through the Chinese courts by seeking criminal or civil penalties under the Economic Espionage Act or through the dispute resolution process established by the World Trade Organization’s TRIPs (Trade-Related Aspects of Intellectual Property) Agreement.

“Addressing these discrete appropriations [of IP] with trade sanctions,” he says, “is like performing microsurgery with a sledgehammer.”

Corporate Intellectual Property Is Being Devalued by Washington

This is the second piece in a week-long series on intellectual property. You can read the previous piece here.

Not many years ago, business leaders concerned themselves with only a business’s tangible assets. Those days are long gone. Today, intangibles make up the majority of corporate assets, sometimes up to 80 percent. Intangible assets include patents, trademarks, copyrights and trade secrets—with patents alone comprising 20 percent to 30 percent or more of a company’s market value.

Savvy business executives now regularly monitor patent values. Business transactions involving a patent portfolio can affect stock values and have significant financial consequences.

Determining patent values is inherently challenging. Unlike stocks, patents have no open market and thus no ongoing quantification of value. But the business leader’s job has become all the more difficult because Washington, like the proverbial “thief in the night,” has upended the system, depressing patent values by as much as 60 percent in just the last three years, according to some economic studies.

In 2011, the U.S. Congress passed the America Invents Act (AIA), creating a unique procedure for cancelling patents, even after a patent is examined by experts at the U.S. Patent and Trademark Office (PTO). Under the AIA, any patent can be attacked—by anyone and at any time. Imagine if a neighbor—or even a stranger—could repeatedly challenge title to your home, year in and year out. Could you get a mortgage? Certainly not. The lender would run from the high risk.

Business investors, such as venture capitalists and hedge fund managers, and internal company managers are in the same boat as the mortgage lender. Why invest in a new product if the patent protecting it from competitive poaching can forever be challenged in AIA proceedings, especially when the rate of patent cancellation is so high?

Well, the AIA reviews have cancelled some of the best patents—those with high commercial value being enforced in expensive lawsuits—in all technology areas, including pharmaceuticals, biotechnology, and computer technology. AIA reviews cancel patents at rates of 60 percent or higher; in courts of law, the rate is about half that. Why the difference? Simply put, AIA reviews require less evidence, and they never hear from live witnesses. Congress also allows anyone to challenge a patent—whether a competitor or not, a stock short seller, or merely a zealot campaigning against all patents.

The patent office aggravated Congress’ AIA design defects. Contrary to courts, the Patent Trial and Appeal Board applies a broad patent interpretation, called the “Broadest Reasonable Interpretation” (BRI). Despite the name, nothing is reasonable about it, particularly when it conflicts with the patent interpretation used by the courts. The BRI was intended for use during initial patent application examination, when the inventor could freely amend the claims, which is not allowed in AIA reviews.

Sen. Christopher Coons, D-Del., the only patent lawyer in Congress, introduced a bill to correct these and other procedural deficiencies. But the bill has not been given a hearing. David Kappos, the PTO director when the AIA procedures were adopted, has explained that the intent was always to improve the rules, as the PTO learns from actual experience. The new PTO Director, Andrei Iancu, is currently reviewing the procedures, so we may see some relief in the near term.

Contemporaneously, the Supreme Court upended the settled law about what inventions are eligible for patenting. Between 2010 and 2014, the court undermined the breadth and clarity of the U.S. patent statute, imposing its own vague notions and ignoring the four classes in the statute: “processes, machines, compositions of matter and manufactures.” The court used undefined terms, such as “abstract,” leaving the business community and the patent office guessing what was meant. The result: Thousands of patents plainly eligible under those words were invalidated. Even worse, hundreds of thousands were cast under a cloud of possible invalidity, left with little or no value.

The impact of the court’s incursion compounded the massive uncertainty from the AIA reviews. Money managers “voted with their feet,” diverting funds from U.S. R&D into safer domestic investments such as entertainment and to overseas R&D. No wonder, for the scope of eligibility was broadened in Europe and Asia and even China, just as the U.S. narrowed it.

Not surprisingly, the annual Chamber of Commerce ranking of national patent systems shifted dramatically. From 2012-2015, the U.S. consistently ranked first. Then in 2016, the U.S. fell to 10th, tied with Hungary. In 2017, the U.S. dropped two more places. The U.S. dropped off the top 10, for the first time ever, in Bloomberg’s 2018 ranking of innovation systems.

So, the current U.S. environment is starving its own companies while funding those of major foreign competitors. Startups were hit particularly hard because they are more dependent on patents to secure funding. Their formation rate fell 40 percent to a half-century low, with more failing than being created in one recent year—a first in U.S. history.

This is especially worrisome because U.S. startups create the most net new jobs, the most economic growth, and the most technological breakthroughs, as documented by studies of the Kauffman Foundation on Entrepreneurship and the U.S. Census Bureau. I explained this to the U.S. Congress when I testified, but they seemed not to have noticed, instead creating AIA reviews that make it unaffordable for startups and small technology companies to protect their inventions.

This was ironic indeed, because the Congress had intended the reviews to be “an alternative to expensive litigation.” Instead, in many cases, the AIA is an expensive, sometimes insurmountable, obstacle to fighting against patent infringement. So, despite my testimony and those of others, no corrections have been legislated in the seven years since the AIA was passed.

Congress, the Supreme Court, and the PTO have all defaulted on their leadership responsibilities to protect the U.S. innovation economy. Their failure is dangerous because our main competitive advantage over foreign rivals is our innovation/invention system.

The future fortunes of both individual companies and our entire nation have thus been put at risk by Washington. Unfortunately, so far, corporate leaders have not been informed or active in pressing Washington to repair the enfeebled innovation system.

Why China is a Leader in Intellectual Property

This is the first piece in a week-long series on intellectual property.

United States President Donald Trump is not the first to complain about intellectual property theft by Chinese companies, but ironically it was U.S. companies’ use of China’s resources that led to the development of its powerhouse of patents.

In the late 1980s and throughout the 1990s, Western firms such as Apple and Intel made large profits by investing in China to take advantage of its cheap labor, often at great human cost. As China’s economy grew and the population became wealthier, Western firms were then able to profit by selling their products back to the wealthier children of the same labor force that had made them.

The Chinese government saw this happening and wanted Western firms benefiting from the Chinese market to give something back. It established a system of approving foreign investments on the condition that the businesses involved agreed to partner with local firms and transfer knowledge and skills to the local Chinese market.

In December 2001, when China joined the WTO, it entered into the Agreement on Trade-Related Aspects of Intellectual Property Rights to bring its IP laws up to a minimum international level. At the same time, the government was keen to transition from being a manufacturing-based economy to an innovation-based economy. This large step forward (as opposed to a great leap) would be fueled by expanding China’s domestically owned intellectual property.

One of China’s more controversial growth tactics has been to focus on fostering IP innovation within China. For example, the government gives preference in procurement to high-technology products whose IP is owned or registered in China.

This has been called a strategic attempt to commercialize non-Chinese ideas in China and a trade barrier potentially contravening China’s WTO commitments, including those under the Trade-Related Aspects of Intellectual Property Rights agreement.

In 2010, the Obama administration filed a complaint with the WTO over China’s use of its innovation policies in the wind power industry. There have been other complaints lodged relating to Chinese IP laws—one in 2007 notably argued that China had failed to enforce IP laws on pirated products, even when they had been identified by victims and/or the Chinese authorities.

Since the late 1990s, China has been steadily improving the quality of its IP protection and the standard of its IP law enforcement. Many of its preferential policies favoring Chinese IP development have been wound back so as not to discriminate against foreign IP—or at least not so obviously. Other amendments have strengthened IP protection and enforcement, as well as increased penalties for IP infringements.

In March 2017, for example, the General Provisions of the Civil Law were amended to make clear that trade secrets can be protected under civil IP laws. Amendments to the 1993 Anti-Unfair Competition Law in early 2017 also improved protection for trade secrets.

China’s most recent 13th Five-Year Plan, approved by the National People’s Congress in early 2016, envisions China as a world leader in science, high-tech and intelligent machines:

We will … expedite the implementation of existing national science and technology programs. … We will move faster to make breakthroughs in core technologies in fields including next generation information and communications, new energy, new materials, aeronautics and astronautics, biomedicine, and smart manufacturing.

Perhaps the best example of China’s goal of becoming a global leader in artificial intelligence is in the area of facial recognition technology. These systems, which automatically identify an individual from a database of digital images, are now a part of everyday life in China in areas such as public security, financial services, transport and retail services.

This technology is also just one aspect of a broader system being rolled out by the Chinese authorities. It aims to monitor and influence the whole of Chinese society (individuals and organizations) through social credit ratings.

The global facial recognition industry is forecast to be worth $6.5 billion by 2021, and its continued growth in China is being spurred by innovative startups such as YITU Technology and DeepGlint.

China knows that an essential part of achieving its aim of science and intelligent technology leadership is putting in place high quality legal protection for intellectual property. However, as recent reports from the United States have found, there remain many deficiencies in China’s protection of trademarks, copyrights, and patents.

IP enforcement in cases of piracy and other breaches is often inadequate: there is no prosecution of the breach, no positive finding that a breach has occurred, or the penalty applied is too light to have any deterrent value.

However, for firms that do take the trouble to properly register their IP in China, protection does exist, and enforcement continues to improve.

This piece was previously published in The Conversation.

Is Wearable Technology The Future of Safety Management?

The Internet of Things has significantly enhanced the efficiency and safety of our day-to-day lives over the past several years. With the click of an icon, connected devices let us see who is in our house, control the thermostat, give our dogs a treat, monitor the afternoon commute, count our steps, gauge our sleep quality and more.

At the same time, these same technologies are being used to improve workplace safety across industries. Consider the transportation industry, where cameras and sensors can now monitor a driver’s locations, driving habits, level of fatigue, and more. There is a growing opportunity and market for connected wearable devices to help prevent injury: Are employees lifting objects correctly? Are they walking in hazardous areas, such as under a crane or near toxic chemicals?

Wearables can collect many data points, such as motion metrics for specific parts of the body and environmental factors including temperature, heat index, and humidity. Most of the data collected will simply show employees doing their jobs correctly; what makes wearables so effective is their ability to help safety managers efficiently find the zebra in the herd of horses—the rare risky behavior hidden among routine movements.

The Data Difference

One popular area to deploy wearables is around lifting, a key risk factor in on-the-job injuries in many industries. The data can show at what angle employees bend, the duration of the bend, whether they bend with a twist, whether they accelerate during the movement, and whether they accelerate while twisting. Safety managers can sift through this information to focus on the bends that are likely to cause injuries over time. By identifying these occurrences and corresponding processes, the safety manager can intervene and help the employee adjust to prevent the movements that may put them at risk.
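The kind of sifting described above can be sketched in a few lines of code. This is a minimal illustration only: the field names, angle cutoff, and duration cutoff are assumptions for the example, not any actual vendor's schema or validated ergonomic thresholds.

```python
# Illustrative sketch: flag potentially risky lifts from wearable motion
# data. Thresholds and field names are hypothetical.

RISKY_ANGLE_DEG = 60      # bend angle beyond which a lift is suspect
RISKY_DURATION_S = 4.0    # bends held at least this long add strain

def flag_risky_bends(bends):
    """Return the bends likely to cause injury over time.

    Each bend is a dict with 'angle_deg', 'duration_s', 'twisting',
    and 'accelerating' keys, mirroring the factors described above.
    """
    flagged = []
    for bend in bends:
        deep = bend["angle_deg"] >= RISKY_ANGLE_DEG
        sustained = bend["duration_s"] >= RISKY_DURATION_S
        twist_accel = bend["twisting"] and bend["accelerating"]
        # Flag deep, sustained bends, or any bend combining twisting
        # with acceleration -- the highest-risk pattern.
        if (deep and sustained) or twist_accel:
            flagged.append(bend)
    return flagged

bends = [
    {"angle_deg": 30, "duration_s": 1.0, "twisting": False, "accelerating": False},
    {"angle_deg": 75, "duration_s": 5.0, "twisting": False, "accelerating": False},
    {"angle_deg": 45, "duration_s": 2.0, "twisting": True,  "accelerating": True},
]
print(len(flag_risky_bends(bends)))  # 2 of the 3 sample bends are flagged
```

In practice, a safety platform would tune these cutoffs per task and per employee rather than hard-coding them, but the filtering logic is the same.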

The data from wearables can also determine which tasks employees are performing incorrectly and what factors may be contributing to poor performance. Fatigue, fitness level, skill level, and job design can all factor into the likelihood of injury. Collecting data through wearable devices allows safety managers to determine when an employee is performing a task improperly or whether the task is poorly designed. Ergonomic teams can then intercede and evaluate such variables as the height of workstations or the repetition of the task. The key is to prevent employees from developing a backache or lower lumbar injury—the No. 1 reported injury for workers’ compensation claims.

Three Levels of an Effective Wearable Program

It is not enough for wearable devices to only collect and report data to safety managers. An effective wearable device program will also provide immediate feedback to the employee so that they can adjust their process. An effective wearable device program integrates three levels of feedback and monitoring:

  1. Haptic response: The wearable device should be configured for the employee’s individual needs and be equipped with a noticeable haptic response that informs the employee they are doing something wrong and must correct it immediately.
  2. Scorecards: The employee and supervisors should be provided with their scorecard on a daily, weekly, monthly, and/or yearly basis. The ability to set thresholds by process is a key feature of a scorecard. This enables employees and their supervisors to track their improvement or lack thereof.
  3. Risk targeting: The system should allow the supervisor to see which employees have the most potential for injury or incident. This will allow the manager to intervene and provide those high-risk employees with training and other tools for improvement.

A wearables program won’t work if the technology is only implemented with one person in each department. The strength of wearable technology is that it provides safety managers with a measurable comparison. It is therefore important to use wearables with a large enough group to develop a benchmark, which will then allow safety managers to identify outliers. With a benchmark, managers can compare processes across an organization so the ergonomist knows where to start.
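The benchmark-and-outlier step might look like the following sketch. The employee names, risk-event counts, and two-standard-deviation cutoff are assumptions chosen for illustration, not a recommended methodology.

```python
# Illustrative sketch: compute a group baseline from per-employee
# risk-event counts and surface the outliers worth a closer look.
from statistics import mean, stdev

def find_outliers(events_per_employee, z_cutoff=2.0):
    """Return employees whose risk-event counts sit well above the benchmark."""
    counts = list(events_per_employee.values())
    benchmark = mean(counts)
    spread = stdev(counts)  # sample standard deviation
    if spread == 0:
        return []  # everyone matches the benchmark exactly
    return [
        name for name, count in events_per_employee.items()
        if (count - benchmark) / spread > z_cutoff
    ]

crew = {"A": 4, "B": 5, "C": 3, "D": 6, "E": 4, "F": 5, "G": 30}
print(find_outliers(crew))  # ['G']
```

The point of the example is the dependence on group size: with only one instrumented worker per department there is no `benchmark` to compute, which is why a wearables pilot needs a large enough cohort.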


While it may seem that companies with the worst track records and the most workplace injuries would be the most eager to embrace wearable technology, they are in fact the least likely to adopt innovations in health and safety strategies—likely because their safety problems are so systemic that they don’t know where to start.

Companies with good or great safety records are already leaders in the adoption of wearable technologies in the workplace. They understand the importance of the health and well-being of their employees and feel that they can still improve their safety programs. Additionally, their organized safety programs allow for them to easily integrate the technology into their existing strategy.

Another challenge for companies is that employees will likely be resistant to the technology until they are clear on how the data will be used. Many employees fear that the data will be used to punish or terminate those who perform poorly according to the device. Therefore, safety managers and other leaders need to be clear about how the wearables will be used and why.

The Future

Wearable technologies are developing at a rapid pace. The geo-positioning components can already alert employees if they are in banned or dangerous work areas, and in the not-so-distant future, geo-positioning may allow wearables to “talk” to machines and buildings. The wearable device of the future could warn employees of their proximity to an active forklift or limit the number of employees who can be near or engage with a specific machine.

Employees are an organization’s greatest asset. Using wearable technologies can help mitigate workplace injuries by reinforcing positive behavior and creating a culture of safety and efficiency. Wearables not only improve your employees’ safety, but they may also prove helpful in defending your organization against claims, ultimately protecting its bottom line.

Catastrophe Losses Not Scaring Off Alternative Capital

The record losses from the natural disasters of 2017—with current estimates of total insured catastrophe losses around $140 billion—provided a significant test for the decade-long rise of alternative capital in risk finance. Businesses and observers may now be wondering: Will the alternative financing that flowed into the insurance and reinsurance industry over the past decade flee?

The answer appears to be a definitive “no.”

Alternative capital, also known as convergence capital, comprises capital from insurance-linked securities managers, specialist reinsurance-sponsored managers, and generalist direct investors as opposed to more “traditional” insurance financing. Pension funds, sovereign wealth funds, and others have earmarked an estimated $1 trillion for investment in the insurance industry, according to Guy Carpenter & Company and JPMorgan Chase Asset Management.

Reinsurance companies historically have used a number of methods to develop their capital base, with alternative capital providing a portion in recent years. The benefits for organizations using alternative capital as a complementary form of risk transfer can include diversifying coverage, efficient and direct deployment of capital, competitive pricing, and dedicated underwriting.

Although losses from Hurricanes Harvey, Irma, and Maria triggered payouts from investors, data from Guy Carpenter show 9 percent more alternative capital entered the industry at the end of last year than in the previous year—and that’s after providers replenished lost capital. Those three major hurricanes accounted for 64 percent of global insured losses from natural disasters in 2017, according to Swiss Re. A previous test of alternative capital occurred in 2011, which saw $110 billion in insured disaster losses. Those losses, however, were largely non-U.S. based—dominated by the Tohoku, Japan, earthquake and tsunami and severe flooding in Thailand.

Since 2011, alternative capital has grown each year, accounting for an estimated $82 billion in 2017, nearly one-fifth of global reinsurance capital (see Figure 1). Traditional capital, meanwhile, has remained stable but has not grown.

Who’s Investing in Catastrophe Risk?

In the broad universe of alternative capital, the many players have different investment objectives. For example, private equity firms and hedge funds may seek double-digit returns and an exit after a few years. Pension funds, on the other hand, may only require mid-single-digit returns because they have a much longer investment horizon.

Pension funds are among the largest investors in alternative financing. These funds represent the world’s largest source of capital, accounting for more than $25 trillion in the 35 member nations of the Organisation for Economic Co-operation and Development. According to the OECD, 75 percent of pension fund assets are in equity and fixed-income investments. The OECD calculates that, in 2016, global pension funds made a weighted average return of 2 percent to 5 percent on their assets.

Alternative investments such as commodities and real estate and nontraditional securities such as catastrophe bonds are attracting interest from pension funds and other investors because their returns tend to have low correlation to other asset classes. Insurance-linked securities are attracting investors because, amid low interest rates, they offer diversification, potentially higher yields, and less volatility over time than traditional stocks and bonds. Risks covered under ILS vary, but one of the most common over the past 20 years has been U.S. windstorm risk.

Outlook for 2018 Is Positive

What might 2018 hold for alternative capital backing of catastrophe risks? If there is another test of this form of capital, will the result be the same? Or if this year’s losses replicate those of 2017, will alternative capital go elsewhere? There are pundits on both sides, but from this vantage point, alternative capital flight seems unlikely.

One reason is the consistent reaction following major events, such as hurricane loss. The Colorado State University Tropical Meteorology Project's seasonal forecast projects slightly above-average hurricane activity for 2018. The CSU team puts the probability of a Category 3, 4, or 5 hurricane making landfall somewhere on the U.S. coastline at 63 percent, against an average of 52 percent over the past century. Where a storm makes landfall is a dominant factor in the amount of insured loss. Historically, the insurance industry has attracted investment following large-loss years.

Another reason is the room to deploy more capital. With $1 trillion earmarked for insurance risk and $82 billion invested in 2017, alternative capital providers have much more capital to allocate to insurance risk. Relative to the size of the global insurance and reinsurance marketplace, alternative capital participation represents a small percentage.
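A quick back-of-envelope check makes that "small percentage" concrete. Both figures below are the estimates quoted above, not fresh data:

```python
# Estimates quoted in the article: capital earmarked for insurance risk
# vs. alternative capital actually deployed as of 2017.
earmarked_usd = 1_000e9   # ~$1 trillion (Guy Carpenter / JPMorgan estimate)
deployed_usd = 82e9       # alternative capital in reinsurance, 2017

# Share of earmarked capital that has actually been put to work.
deployed_share = deployed_usd / earmarked_usd
print(f"Deployed share: {deployed_share:.1%}")  # 8.2%
```

On these numbers, more than 90 percent of the capital earmarked for insurance risk remains on the sidelines.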

The future is uncertain. Catastrophe losses in 2018 may turn out to be heavy, or they may be light. What would be the financial impact if three major storms significantly greater than Harvey, Irma, and Maria struck the U.S. coastline in sequence or if another major natural catastrophe coincided with a single major storm? Fortunately, the global insurance industry—and its capital—has yet to experience that phenomenon in its core markets. Until it does, investor interest in supplying capital for insured risk is likely to continue to rise. If such loss events were to occur, a similar response appears likely as well.

We Need To Approach AI Risks Like We Do Natural Disasters

The risks posed by intelligent devices will soon surpass the magnitude of those associated with natural disasters. Tens of billions of connected sensors are being embedded in everything ranging from industrial robots and safety systems to self-driving cars and refrigerators. At the same time, the capabilities of artificial intelligence algorithms are evolving rapidly. Our growing reliance on so many intelligent, connected devices is opening up the possibility of global-scale shutdowns.

The good news is that natural disasters themselves, which Munich Re says caused $330 billion in economic losses globally in 2017, provide a template for how to mitigate the growing and catastrophic risk posed by AI. As they have done for extreme weather and natural disasters, companies can begin to establish international protocols and standards to govern AI within their own walls as well as in their relationships with other companies, insurers, and policymakers.

Intelligent Device Recovery Plans

Today, many companies are exposed to intelligent device risks that could harm their own operations as well as their customers. Yet few have formally quantified their revenue at risk and potential liability. Nor have they set up safety and security protocols for potential black swan AI events.

They should. As with natural disasters, companies cannot completely protect against smart-device risks by buying insurance; they must have worst-case scenario recovery plans. Managers have to identify their higher- and lower-risk intelligent device vulnerabilities, add redundant systems, and potentially set up the AI equivalent of tsunami early warning systems. In addition, they need the ability to switch to manually controlled environments in case artificially intelligent systems have to be shut down, and to recall faulty smart products.

Contingency plans must go beyond a natural disaster playbook. Given the many potential points of connectivity, it will be much more difficult to predict, identify, and correct the cause of large-scale smart-device failures. Debugging and reprogramming a faulty intelligent device is even more complicated than creating a patch to fight against a malevolent cyberattack, because it can be unclear what rules the machines are following.

As a result, no company will be able to recover on its own. To rebound from the potential impact of a cascading set of global AI-related shocks, managers will have to consider the vulnerabilities that exist everywhere, from their suppliers to their customers. Addressing those vulnerabilities will require coordination across a large number of technology service providers and other companies that could catch or spread an AI infection to others, regardless of who is at fault.

Reducing the risks of more intelligent and interconnected networks will be difficult and costly. We can’t afford not to do it.

AI Insurance Products and Services

Insurers should quantify their exposure to a global intelligent device meltdown, offer new products, and advise companies and governments. Even with about $700 billion in capital available in the United States and hundreds of billions of dollars more around the globe, property and casualty insurers’ balance sheets are too small to cover all the potential losses from a global intelligent device disaster. But insurers can use data collected on losses across industries to advise companies and governments on how best to quantify their potential exposure to a worst-case scenario.

As they have for natural catastrophes, insurers can also encourage public sector safeguards. Since insurers cannot completely mitigate the outsized risks posed by extreme weather events, governments of many developed countries and international organizations provide natural catastrophe relief through government agencies such as the Federal Emergency Management Agency and public flood insurance programs. Insurers need to help mobilize similar public sector resources to help the potential victims of an AI-enabled smart device disaster.

In addition, they can start to advise clients on how they can enhance their safety and security protocols to head off the dangerous repercussions of an intelligent device meltdown. Today, some leading insurers are suggesting security procedures that companies could follow to attend to information breaches and interruptions in the event of a global failure of interconnected systems. But they should also begin to explore steps to address the potential of smart devices becoming even more sophisticated and potentially setting and following their own objectives.

AI International Protocols

Finally, policymakers should establish international trust and ethics guidelines to govern the development and implementation of ever more advanced AI products and systems. To reduce the future impact from natural disasters, governments and international organizations such as the American Red Cross and the World Bank collect and share data concerning the destructive ramifications and the support required to help victims. Similar intelligence will be critical to curb the impact of potential smart device shocks as artificial intelligence evolves and the number of connected IoT devices, sensors, and actuators climbs past a projected 46 billion by 2021, according to Juniper Research.

About a dozen governments, technology companies and international organizations such as the Institute of Electrical and Electronics Engineers and the World Economic Forum are starting to explore global AI trust and ethics protocols for retaining control of interconnected AI-driven systems and products. These forums are beginning to deepen understanding of the potential harm that intelligent devices could cause and the need for best practices. But much more has to be done.

Establishing the resources required to reduce the risks that will come with the world’s transition to more intelligent and interconnected networks will be difficult and costly. But we can’t afford not to do it, and our experience responding to some of the world’s worst “100-year storms” offers a valuable starting point for figuring out how to get ahead of potentially even more severe disasters. We just need companies, insurers, and policymakers to recognize that such efforts are an essential investment in our future.

This piece first appeared in Oliver Wyman’s Insurtech blog.

Confronting the Opioid Epidemic and Litigation Risk

The United States—along with much of the world—is confronting an opioid epidemic. According to the Centers for Disease Control and Prevention, 115 Americans die every day from an opioid overdose, and the CDC states that overdoses from prescription opioids “are a driving factor” in the increase in overdose deaths. Deaths from prescription opioids including oxycodone, hydrocodone, and methadone have more than quadrupled since 1999.

The economic and human costs associated with this epidemic are astronomical. A November 2017 report issued by the Council of Economic Advisers estimated the economic cost of the opioid crisis for 2015 at $504 billion, or 2.8 percent of U.S. gross domestic product that year. In addition to health care spending, criminal justice costs, and lost productivity due to addiction and incarceration, there are significant losses from fatalities based on standard value of a statistical life analysis.

And, while final statistics for 2017 are not yet available, fatalities continued to increase in 2016, with the National Institutes of Health reporting 64,000 total overdose deaths. Of these, 20,000 were related to synthetic opioids, including fentanyl and fentanyl analogs; more than 15,000 were heroin overdoses; and more than 14,000 were related to natural and semisynthetic opioids. The opioid problem is growing outside the U.S. as well, with opioid-related deaths increasing in Canada and surveys finding high nonmedical use of prescription painkillers among teenagers in Spain, the United Kingdom, Australia, and elsewhere.

For many businesses—particularly pharmaceutical manufacturers, distributors, pharmacies, and prescription benefit managers operating in the U.S.—the epidemic has also translated into significant litigation risk.

Confronting Opioid-Related Litigation

In the U.S., more than 500 lawsuits have been filed by states, counties, cities, Native American tribes, unions, and others. The suits typically allege, inter alia, that the defendants have:

  1. Oversaturated the market while failing to implement proper safeguards against misuse and diversion
  2. Engaged in deceptive business practices, making false representations about their products’ addictiveness and effectiveness
  3. Failed to monitor suspicious orders in accordance with the federal Controlled Substances Act

At this point, these are merely allegations. Many of the complaints have been consolidated in a multidistrict litigation known as In re: National Prescription Opiate Litigation, pending before Judge Dan A. Polster in the federal district court for the Northern District of Ohio.

Judge Polster has suggested that the parties negotiate a global settlement, which could resemble the 1998 settlement between four major U.S. tobacco companies and the attorneys general of 46 states. Under the terms of that agreement, the tobacco companies agreed to pay an estimated $206 billion over 25 years to fund various educational and enforcement efforts and to reimburse states for tobacco-related health care costs.

There are, however, some major hurdles on any path to a global settlement. Unlike cigarettes, opioids have been shown to be beneficial for some people suffering severe and/or chronic pain. They are closely regulated for safety and efficacy, and they are prescribed by physicians. Defendants could thus invoke the “learned intermediary” defense, which allows a manufacturer to discharge its duty to warn consumers by informing a learned intermediary—for example, a prescribing physician—of the risks associated with its product.

Meanwhile, the defendants themselves are not homogenous; they represent several different parts of the opioid supply chain. As such, they may not view themselves as equally liable—if at all—for the current crisis.

Despite these and other obstacles, litigation remains an avenue for government entities to address demands that government do something and to potentially secure returns through settlements and/or verdicts. One cannot overestimate the role that plaintiffs’ attorneys play as they seek legal fees that may be similar in amount to the estimated $30 billion in fees that attorneys earned in tobacco litigation. Since their attorneys are generally representing them on a contingency basis, plaintiffs’ investment in the litigation is relatively modest. Defendants, meanwhile, will incur significant legal expenses and may suffer reputational damage.

Managing Litigation Risks

It is unclear which course the consolidated litigation will take, but more lawsuits are likely to be filed—not only by governmental bodies but by other entities seeking to recover unreimbursed costs for treating addicts and overdoses—and more companies may become litigation targets.

Any company with a connection to the manufacture, distribution, or sale of opioids should undertake an immediate review of its insurance coverage. Risk professionals should be prepared to address any exclusions that insurers might seek to add at renewal.

Organizations should also engage experts in insurance recovery to identify and analyze all potentially applicable current and historical coverage and to assist in maximizing any potential insurance coverage for opioid suits. They should report claims properly and obtain an acknowledgment for all claims, scrutinize any insurer coverage positions, and pursue coverage under applicable policies.

Companies involved in litigation should, with advice from counsel, coordinate the flow of information to insurers—providing regular updates on the progress of the litigation, for example by organizing regular calls—while taking care to preserve privileged information.

Litigation related to opioids has been ongoing since the early 2000s, but the flood of government suits is a relatively new phenomenon and may not be reflected in current insurance policies. Risk professionals should be attentive to insurers’ efforts to impose exclusions for opioid litigation and work to make sure that these exclusions are not broader than necessary.

Many companies in the opioid manufacturing, distribution, and sales chain have taken steps to prevent diversion of opioids, curb sales to suspicious entities, and improve monitoring and reporting of sales. Litigation usually targets past events and practices, but adopting best practices can deter future lawsuits and reinforce an organization’s presentation of its own case to litigants and insurers.

The opioid epidemic is an extraordinarily complex problem with no “silver bullets.” As governments and other entities seek restitution for the high costs imposed by addiction, treatment, and other effects of opioid abuse, potential targets should take steps to protect their own interests—by not only working closely with advisers to maximize their insurance coverage but also making sure that applicable rules and regulations are followed.

Keeping Pace with Innovation Risks and Opportunities

Technological innovations are a driving force in today’s interconnected business world. Along with the potential to both create corporate growth and contribute solutions to societal issues, technology brings with it increased risks. Consider that The Global Risks Report 2018 from the World Economic Forum listed adverse technological consequences as one of the top four risks in terms of both likelihood and impact, with cyberattacks also among the top four for impact (see Figure 1).

Figure 1. Risks With the Greatest Change in Concern, 2016-2017

SOURCE: The Global Risks Report 2018

To manage the potential risks from innovation while taking advantage of the opportunities, companies need to understand how the various disruptive technologies are being used in their organizations and in their industries. A near-daily drumbeat of news stories about failures and missteps reminds us how essential it is to have effective procedures in place when deploying and managing new technology.

And yet, in this year’s Excellence in Risk Management survey, we found that nearly half of respondents could not say that their organization had a clear technology risk management process (see Figure 2). And only 14 percent expressed strong confidence that such a process was in place. Clearly there is work to be done.

Figure 2.

Seventy-five percent of respondents said one of their organization’s goals is “to become more digital” (see Figure 3). When we dug further to find out what that means, we broke the responses into two areas: those that related to operations and efficiency, and those that related to growth.

There was a clear orientation in responses toward operational improvements, such as delivering goods faster and automating core processes. There’s no question that such improvements are valuable. For example, many industries—insurance being one—are built in part on masses of paper, scanned images, and PDFs, or what data analysts refer to as “unstructured data.” Efficient machine learning algorithms require such data to be structured, which can be done with natural language processing and other advanced technologies that will soon become prevalent.
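As a hedged illustration of what “structuring” unstructured text can mean in practice, the toy sketch below pulls a few fields out of a made-up claim note using plain regular expressions. Real pipelines would use proper NLP tooling; the note, field names, and patterns here are invented for the example:

```python
import re

# Invented example of an unstructured claim note.
NOTE = "Claim 4471: water damage at 12 Elm St on 2018-03-02, est. loss $18,500."

def structure_note(note: str) -> dict:
    """Extract a few structured fields from a free-text claim note."""
    claim_id = re.search(r"Claim\s+(\d+)", note)
    date = re.search(r"(\d{4}-\d{2}-\d{2})", note)
    loss = re.search(r"\$([\d,]+)", note)
    return {
        "claim_id": claim_id.group(1) if claim_id else None,
        "date": date.group(1) if date else None,
        "loss_usd": int(loss.group(1).replace(",", "")) if loss else None,
    }

record = structure_note(NOTE)
```

Once notes like this are reduced to consistent fields, they can feed the kinds of machine learning models the survey respondents are investing in.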

But at the same time, it’s important to keep an eye on ways in which digitization will change the way companies interact with customers—thus changing the nature of their risks.

Today’s companies must ask: What new markets are we looking to open? How are we positioning the company for growth? The important link is the shift in risk profiles, which are changing at an accelerating pace. Risk executives must lean in to these changes. They should drive internal conversations to help understand the implications of new business models. And they should deploy an analytical decision-making framework that ensures the risk finance approach is optimized against an ever-changing risk profile.

Figure 3.

So we see companies continuing to digitize and trying to manage and accomplish more with an ever-increasing cascade of data. Our survey, now in its 15th year, has long found the sheer volume of available data for risk management to be a source of both opportunity and consternation. In the 2017 Excellence survey, for example, the “inability to model the magnitude of the risk” was the most commonly cited barrier to organizations’ understanding the impact of disruptive technology risks. At the same time, improving the use of data and analytics was the No. 1 focus area for developing risk management capabilities.

So it’s no surprise that this year’s respondents said they are looking for technologies to help sort through the chatter in the data (see Figure 4). They want to be able to use data to see the risks on the horizon, inform their response when a crisis arises, and help them refine risk finance.

Figure 4.

With disruption being the new normal, risk professionals will be increasingly sought out to contribute to their organizations’ strategic decisions. A failure to develop the needed insights and connections could put the risk function in the background as the organization moves ahead. Fortunately, both the desire and the talent to play a leading part are there.

The World Needs To Build More Than 2 Billion New Homes Over the Next 80 Years

By the end of this century, the world’s population will have increased by half—that’s another 3.6 billion people. According to the UN, the global population is set to reach over 11.2 billion by the year 2100, up from the current population, which was estimated at the end of 2017 to be 7.6 billion. And that is considered to be “medium growth.”

The upscaling required in terms of infrastructure and development, not to mention the pressure on material resources, is equivalent to supplying seven times the population of the (pre-Brexit) European Union countries, currently 511 million. With the global population rising at 45 million per year comes the inevitable rise in demand for food, water and materials, and perhaps most essentially, housing.

Housing Needs Are Changing

Average household sizes vary significantly by country and between different continents. According to the UN, recent trends over the last 50 years have shown declines in household sizes. For example, in France, the average household size fell from 3.1 persons in 1968 to 2.3 in 2011, the same time the country’s fertility rate fell from 2.6 to 2.0 live births per woman. In Kenya, the average household size fell from 5.3 persons per household in 1969 to 4.0 in 2014, in line with a fertility decline from 8.1 to 4.4 live births per woman.

Increasingly aging populations, particularly in developed countries, are causing a demographic shift in future care needs, but this trend also means that people are staying in their own homes for longer, which affects the cycle of existing housing becoming available each year. One of the most marked changes has been the rise in one- and two-person households in the UK and other developed countries.

Statistics published by the National Records of Scotland, for example, reveal the influence of these changing demographics, with future household demand rising faster than population growth. By 2037, Scotland’s population growth is forecast to be 9 percent, with growth in the number of households forecast to be 17 percent. This 8 percent difference is, in effect, the household growth demand from the existing population.

In England, between now and 2041, the population is expected to increase by 16 percent, with projected household growth at 23 percent, resulting in a 7 percent difference in demand.

As people live longer and one- and two-person households increase, the number of future households required rises faster than the population. In 2014, urban-issues website CityLab dubbed the situation the “population bomb.”

As more developing countries deliver infrastructure and progress similar to developed countries—improving the standard of living and extending life expectancy—household sizes will decrease, placing greater demand on supply of new housing. If this difference between household demand and population growth occurs globally at around 7-8 percent over the next 80 years, this will require an additional 800 million homes.

Taking an average global three-person household (1.2 billion homes) coupled with that 8 percent demographic factor of total global population over the period results in a need for more than two billion new homes by the end of the 21st century.
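The arithmetic behind these totals can be reconstructed roughly as follows. The three-person household, the 8 percent demographic factor, and the one-extra-home-per-person treatment of that factor are the article's own assumptions, as implied by its figures:

```python
# Reconstruction of the article's back-of-envelope housing arithmetic.
population_2100 = 11.2e9       # UN medium-growth projection for 2100
population_growth = 3.6e9      # additional people by 2100
avg_household_size = 3         # assumed global average household

# Homes needed for the additional population alone.
homes_for_new_people = population_growth / avg_household_size  # ~1.2 billion

# Extra homes from household growth outpacing population growth
# (the article applies its ~8% factor to the total 2100 population).
demographic_factor = 0.08
extra_homes = demographic_factor * population_2100             # ~0.9 billion

total_new_homes = homes_for_new_people + extra_homes           # > 2 billion
```

The result, a little over 2 billion, matches the article's "more than two billion new homes by the end of the 21st century."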

Meeting the Demand

The current and future demand for new housing is compelling governments to push for further innovations in “offsite”—prefabricated—construction to speed up the supply of new housing. The UK Industrial Strategy published in November 2017 has a strong focus on offsite construction for the future. This sector has grown rapidly over the last decade with new markets in health care, education and commercial buildings. But for prefab construction, delivering more houses at a faster rate means looking at alternative solutions to the problem.

Issues that slow the rate at which prefab houses are built include the lengthy preparation time required for substructures and foundations; delays to the installation of utilities and building services; and a lack of well-trained construction-site managers capable of delivering the complex logistics involved. Meanwhile, the more than 65 million people displaced by man-made and natural disasters globally put further pressure on countries already unable to supply enough new housing.

The issue of availability of materials to meet the demands of constructing two billion new homes emphasizes the need for countries to resource them as efficiently as possible. Government policies that encourage the sustainable design of new buildings to maximize future re-use, reduce carbon emissions and manage resources properly will be essential. Over the next 30 years, the countries that promote policies to help sustain and increase new housing provision will be more likely to avoid problems in sourcing materials and price hikes.

For many countries, housing supply is now a hot topic for national debate and policy strategy. For the rest of the world, it will soon become the most pressing issue facing governments this century.

This piece was previously published in The Conversation.

Executive Compensation in the Age of Populism

Executive compensation and corporate governance are front-and-center topics for professionals in the U.S., Western Europe and elsewhere around the world.  

Additionally, focus on compensation committees has been increasing since 2011, when shareholders first gained the right to vote on a company’s executive compensation program as described in the Compensation Discussion and Analysis (CD&A) section of its proxy statement. This right, the so-called “say on pay,” gives shareholders and their advisers greater influence over executive pay and governance matters and raises the profile of compensation committees.

The CD&A section of the proxy statement includes narrative disclosure of the objectives and policies of the executive pay program, and the Compensation Committee Report includes the committee’s signoff on the CD&A.

BRINK spoke about developments in this area with Teresa Bayewitz, a principal in Mercer’s Career business in New York, and Gregg Passin, a senior partner in Mercer’s New York office and Mercer’s North American Leader for Executive Rewards Consulting. Ms. Bayewitz and Mr. Passin shared insights on executive compensation in the age of populism and shareholder activism, an increasing focus on executive pay in the context of organizational performance, and recommendations for companies that find themselves under the spotlight on these and other issues.  

BRINK: How concerned are corporate boards with the issue of executive compensation?

Gregg Passin: They’re highly concerned. Certainly the board compensation and remuneration committees, the committees with whom we work, that’s their focus. But I would say in general, executive compensation and corporate governance are very hot topics globally, in the U.S., Western Europe, really globally.

It’s a critical issue, both in terms of attracting and retaining the executive talent that organizations need to be successful, as well as to motivate the performance that the company needs—that’s the internal perspective.

The external perspective is obviously that the level of executive compensation gets a lot of press, and in the age of populism and shareholder activism that we live in today, the optics of executive compensation and corporate governance are sharper, perhaps, than they’ve ever been.

BRINK: How has the scrutiny affected the trend in executive compensation?

Teresa Bayewitz: I don’t think we can draw a direct correlation or an inverse correlation just yet. I think there’s a lot more caution and a lot more thought that’s going into setting executive pay and setting performance goals and objectives at the board level so that they don’t become the next poster child for bad corporate governance or outsized executive pay packages.

According to the National Association of Corporate Directors survey, 76 percent of boards said they had discussed whether their compensation practices are driving the right behaviors. So the discussion at the board level has moved beyond simply approving an absolute level of pay to an additional level of concern about whether the incentives in place in the organization are driving the right behaviors for the long-term health of the company.

BRINK: Do you think that we’re seeing a change in the culture of these companies about what executives should be given in terms of compensation?

Ms. Bayewitz: There’s certainly more emphasis on linking executive pay to performance of the organization than there’s ever been. And with that comes the emphasis on making sure you’re incentivizing the right behaviors.

BRINK: For example?

Mr. Passin: For example, a bank might encourage its employees to cross-sell across different offerings, which, on the face of it, sounds innocuous and a metric that many companies might have. But what has happened is that employees have opened up fictitious accounts for new products for existing customers. If somebody had a savings account, an employee internally might have also issued them a credit card without telling them, just to get their numbers up.

Now, cross-selling as a metric on its own isn’t necessarily bad. You just need to have processes in place to make sure the accounts that are being opened are for real, and that it’s really what customers need.

BRINK: There’s a lot of talk these days around inequality. Do you think companies are seeking to narrow their own gap between what executives are paid and what employees are paid?

Mr. Passin: A tough question to answer. In the U.S., we’re just now seeing the results of the disclosure of the CEO pay ratio, which is required under the Dodd-Frank Act and shows the ratio of CEO pay to the pay of the median employee in the organization. So, we’re just starting to get the beginning disclosures on that.

From the U.S. perspective, I don’t think that this is going to have any significant impact on levels of executive compensation. Similar types of requirements are or will be required shortly in Western Europe, and there it might have a little bit more impact on executive compensation levels than here in the U.S. Here, it’s more about disclosure.

Ms. Bayewitz: There is also, I don’t want to call it a trend yet, but an idea floating out there that corporations need to do more to be socially responsible. There was a letter written by the CEO of BlackRock and circulated widely among CEOs of U.S. companies, pressing them on their obligation as leaders of corporations to do more on a variety of factors: environmental, social, but also income inequality. So, there’s maybe a little bit of peer pressure happening as well.

BRINK: Another big area is obviously gender inequality—has that played into this whole issue of executive compensation?

Ms. Bayewitz: A number of compensation committees, because they are reading the same media that we’re all reading, have been asking management, “What are we doing about gender pay equity?” And we certainly see from our clients, where we’re dealing with the compensation committee, that they’re coming to ask us for help to assess this as a broader employment issue.

To be honest, it’s a risk issue. Nobody wants to be either defamed or sued for doing something that’s really not right. And also, you just don’t want to be doing something that’s not right. It really goes beyond gender—a lot of the clients that we are helping with this analysis of their gender pay inequity are also looking at race and ethnicity.

BRINK: What are your recommendations for companies that are under the spotlight from populism, from shareholder activism, to get this right?

Mr. Passin: First of all, to understand the business strategy: What your organization needs to do to be successful and grow over the long term. That’s the first thing. Compensation programs, for the entire organization, but certainly the executive compensation program, need to support that business strategy. That comes first.

Develop and design an executive compensation program that is going to help you achieve your business strategy, your business goals, as best as possible, and communicate that to your shareholders, to your other stakeholders, the best way you’re able to.

Show them the connection between the compensation program, the performance metrics, the goals that are set, and your business strategy—why it makes the most sense to enable you to attract, and retain, and motivate the executive talent that is needed for the organization to succeed.

What Keeps Board Directors Awake At Night?

Corporate directors are worrying about industry disruption and short-term thinking, according to the 2018 survey of more than 1,000 U.S. directors and executives by the National Association of Corporate Directors.

By a wide margin, directors cited significant industry change as the trend with the greatest impact on their companies, followed by business model disruption and changing global economic conditions.

Concerns about cybersecurity and competition for talent rounded out the top five concerns. Respondents’ concerns about regulatory burden dropped sharply, from 58 percent in 2016 to just 29 percent. This comes as no surprise given the policies of the Trump administration, but the NACD points out that regulatory burden appears to be increasing in the EU and China.

Exhibit 1: Trends Affecting Directors’ Companies

Source: 2017-2018 NACD Public Company Governance Survey

Pressure for Short-Term Performance

Boards are still facing pressures to achieve short-term performance. Seventy-four percent of respondents said that their management’s focus on long-term strategic goals has been affected by pressure to deliver short-term results. Directors that have been approached by activist investors report facing even greater pressure to meet short-term goals.

Exhibit 2: Impact of Short-Term Pressures

Source: NACD

Understanding Risk Is Key

An atmosphere of risk and uncertainty is driving directors to seek a deeper involvement in strategy. Just over half of board respondents are confident that managers can appropriately address growing geopolitical risks—but half also say that there’s not enough time during board meetings to discuss strategy in depth.

Exhibit 3: Board’s Involvement in Strategy

Source: NACD

Ignorance about Corporate Culture

Board understanding of corporate culture doesn’t extend beyond the tone at the top, “creating a risky disconnect,” the NACD reports. While 79 percent of directors express confidence in management’s ability to sustain a healthy culture during performance challenges, 92 percent rely on the CEO for reporting about the health of the culture.

A much smaller fraction of board members hear directly from specialist functions that could bring an independent perspective, such as internal audit (39 percent), compliance and ethics (30 percent) and risk management (20 percent). Not surprisingly, while most directors say they understand the health of the culture at the top, a much smaller number understand the perspectives of middle or lower management.  

Exhibit 4: Health of the Organizational Culture

Source: NACD

Less Confidence in Cyber-Risk Preparedness

Just 37 percent of board members said they felt confident or very confident that their company is properly secured against a cyberattack, five percentage points lower than last year. The authors speculate that directors are getting more savvy about cybersecurity, “which may explain their increased skepticism of management’s efforts.”

Exhibit 5: Security against Cyberattack

Source: NACD

CEO Succession Planning

2017 was a big year for unexpected firings and departures of high-profile CEOs. That may help to explain why more directors are giving priority to improving CEO-succession planning. There was also a large increase in discussions about CEO and executive-team successions with investors.

Exhibit 6: CEO Succession Planning

Source: NACD

Similar to last year, very few boards cited social and environmental issues as top trends that will impact business performance. For example, just 6 percent of respondents selected climate change as a top-five trend. Given the growing range of issues and challenges facing boards, however, it’s not surprising that board evaluations are becoming increasingly rigorous. Sixty percent of respondents said that they now evaluate the performance of individual directors—a sizable jump from 41 percent last year.

What Are the Targets in the U.S.–China Trade War?

The U.S.–China trade war is escalating faster than expected, but the real question is: What are the ultimate targets for the two countries? To answer this question, we analyzed the 1,333 products covered by the U.S.’s latest action against China’s breach of intellectual property rights and classified them based on two criteria: the technological content and the weight in China’s total exports to the U.S. We then applied the same methodology to China’s second round of export tariffs announced this week and compared the two.
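The two-way bucketing described above can be sketched in a few lines of code. This is a minimal illustration only, not the authors’ actual methodology; the `classify` function, the HS codes, the tier labels, and the export values are all invented for the example:

```python
# Hypothetical sketch: bucket each tariffed product line by technological
# tier and report each tier's share of the list's total export value.

def classify(products):
    """Return each technology tier's share (%) of total export value."""
    totals = {"high": 0.0, "mid": 0.0, "low": 0.0}
    for p in products:
        totals[p["tier"]] += p["export_value_bn"]
    grand = sum(totals.values())
    return {tier: round(100 * v / grand, 1) for tier, v in totals.items()}

# Illustrative records only; HS codes and values are made up.
sample = [
    {"hs_code": "9018", "tier": "high", "export_value_bn": 10.0},
    {"hs_code": "8708", "tier": "mid", "export_value_bn": 4.0},
    {"hs_code": "6403", "tier": "low", "export_value_bn": 1.0},
]
print(classify(sample))  # {'high': 66.7, 'mid': 26.7, 'low': 6.7}
```

Running the same aggregation over each country’s full tariff list is what yields value-share figures like those reported below.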

We found that the U.S. tariff package appears to be much smaller than the estimate produced by the U.S. administration of $60 billion, based on our bottom-up estimation of the export value of the 1,333 products (Chart 1). However, 84 percent of the total value of the products are high-end exports, while the low-end ones only constitute 3 percent of total value (see Chart 2).

As for China’s list of targeted imports from the U.S. (106 in total), our estimated value is almost the same as the one announced by the Chinese government ($50 billion). However, 50 percent of the products on China’s list are at the lower end of the value chain.

Automotive Industry the Most Affected

Our bottom-up estimate of the value of imported products covered by the tariffs, $29 billion, is equivalent to 25 percent of U.S. imports from China.

Among all products, there are three key targets. Automobiles are the most affected sector, equal to 7.7 percent of U.S. exports into China. Advanced instruments (optical, measuring and medical instruments) and nuclear reactors, machinery and mechanical appliances account for 6.4 percent and 6.0 percent, respectively.

High-tech categories tend to have a higher proportion of products subject to import tariffs, ranging from 44 percent to as high as 90 percent. In the most extreme case, 90 percent of optical, measuring and medical instruments will be affected. Eighty-one percent of vehicle exports and 70 percent of railway exports will also be subject to additional tariffs; these are precisely the sectors in which China is trying to grow both global market share and technological capability.

The list also includes sectors in which China is still climbing the technological ladder, in particular arms and aircraft. As we discussed in a previous report, China’s “Manufacturing 2025” program is a key focus. The goal is not the trade balance but technological advancement.

For China, It Is About Scale

China’s retaliation so far appears to be aimed at hurting U.S. exports in terms of scale, and it covers large items such as soybeans, aircraft and vehicles that together amount to 29 percent of U.S. exports into China. The main targeted agricultural products (including soybeans and cereals) account for 11.6 percent of China’s imports from the U.S., followed by aircraft at 9.4 percent and vehicles at 8.9 percent.

More than 50 percent of mineral fuels and plastic articles, which are also of some importance among imports, are likewise subject to tariffs. Semiconductors, the fourth-largest item in China’s imports from the U.S., have been exempted, probably because of their irreplaceable nature and China’s urgent need for such technologies. In contrast to the U.S. list, China’s list covers more primary products than high-tech products, so the impact will be more direct but short-lived.

Chinese Tariffs Will Impact China Too

China’s heavy reliance on the above products also means tariffs could hurt Chinese producers and consumers. Beyond semiconductors, automakers, which export about 30 percent of their products into China, will be subject to another 25 percent tariff on top of the existing 25 percent. At the same time, however, 28 percent of China’s imported vehicles originate in the U.S. Interestingly, Premier Li Keqiang promised during the People’s Congress to cut the tariff on automobiles, suggesting that a tariff on vehicles is clearly not desirable for China.
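As a side note on the arithmetic above: ad valorem duties are typically each assessed on the customs value rather than compounded, so a 25 percent duty on top of an existing 25 percent duty is additive. A toy calculation (the `landed_cost` helper and the vehicle price are invented for illustration):

```python
# Each ad valorem duty is applied to the customs value, so rates add.

def landed_cost(customs_value, duty_rates):
    """Customs value plus all duties, each levied on the customs value."""
    return customs_value * (1 + sum(duty_rates))

# A $30,000 vehicle under the existing 25% duty plus a 25% retaliatory duty:
print(landed_cost(30_000, [0.25, 0.25]))  # 45000.0 (an effective 50% rate)
```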

The same can be said for soybeans and aircrafts, as China buys 41 percent of imported soybeans and 63 percent of imported aircraft from the U.S. The recent fall in pork prices could offer room for China to retaliate through soybeans, a key source of piglet feed, and China could also switch to non-U.S. companies to meet the demand in aircraft, but whether this is sustainable is another question.

Given the large scale of China’s purchases, substituting new sources may not be easy in the short run because of constraints in infrastructure and productivity. Higher inflation and a lack of competition in machinery could also mean higher costs for China.

Stopping China from Gaining a Technology Advantage

Our interpretation of these findings is that the U.S. is not really targeting the bilateral trade deficit with China, but trying to constrain China from climbing higher up the technological ladder.

The U.S. is hitting China where it hurts the most, especially as technological modernization is officially enshrined in China’s Manufacturing 2025 plan, hence the strong retaliation by China. However, it seems clear that China is trying to minimize the self-inflicted cost of retaliation by focusing, to the extent possible, on lower-end products.

In other words, both the U.S. and China are targeting each other’s weakest points—the U.S. is targeting China’s future technological capacity, and China is responding by targeting U.S. exporters’ present revenues. The instantaneous impact on U.S. exporters could well explain President Donald Trump’s immediate response with an even larger package of import tariffs. In such circumstances, it is hard to think of a way to negotiate.

How Should Business Handle the Changing Nature of Terrorism?

Terrorism remains a persistent and significant threat to businesses, governments, and individuals. Fewer people were killed by acts of terrorism, insurgency, and politically or ideologically motivated violence in 2017 than in 2016, but the number of incidents is still very large – and the means of attack have shifted. As such, it’s critical that businesses take stock of the strategies available to them to manage and finance that risk.

Shifting Threats and Costs

Marsh’s 2018 Terrorism Risk Insurance Report, prepared with support from Guy Carpenter, explores terrorism trends, the state of the terrorism insurance marketplace, and mitigation strategies for global businesses. Among the report’s key findings:

  • Acts of terrorism have increasingly come against soft targets and been perpetrated by “lone wolves” and small groups with no direct connection to known terrorist organizations, while past attacks were carried out primarily by specific groups against high-value and high-profile targets.
  • Weapons of choice now include vehicles, knives, and other handheld devices, and they could include ransomware and other destructive cyber tools in the future.
  • Actors backed by nation-states launched destructive ransomware attacks in 2017, raising the prospect that similarly destructive cyberattacks could soon be carried out by terrorists.


Impact on Business

In addition to direct property damage and injury to employees, these attacks can have significant indirect effects on businesses. These include:

  • Supply chain disruption and security costs: Terrorist groups carried out nearly 350 attacks on global supply chains in 2016, an increase of 16 percent from 2015, according to BSI Supply Chain Services and Solutions. For example, stricter controls along France’s borders following the November 2015 Paris attacks cost companies an additional $59 per delayed vehicle.
  • Lost revenue: Terrorist attacks in Western Europe in late 2015 and early 2016 cost European airlines $2.5 billion in lost revenue in 2016, according to the International Air Transport Association.
  • Consumer confidence: Although U.S. consumer confidence increased in the third quarter of 2017, terrorism was cited as the top concern for 21 percent of consumers, according to The Nielsen Company.

Risk Financing Options

The most common way for businesses to manage terrorism risk is to purchase property insurance, which can reimburse companies for costs stemming from physical damage and business interruption resulting from acts that are motivated by politics, religion, or ideology.

In 2017, 62 percent of U.S. businesses purchased property terrorism insurance, according to Marsh data.

Purchasing, or take-up, rates across all industries have generally stayed close to 60 percent in recent years, but in 2017, rates varied by industry, geography, and company size. Take-up rates for terrorism insurance were higher for larger companies; 67 percent of companies with $500 million or more in total insured values purchased terrorism insurance, and those companies also allocated more of their property insurance premiums to terrorism coverage than smaller companies.

Geography and Sector Matter

By industry, education entities, health care organizations, financial institutions, and real estate companies had the highest take-up rates, each exceeding 70 percent. This is due in large part to the sizable presence that organizations in these industries have in central business districts and major metropolitan areas that insurers perceive to be at a higher risk of terrorist attacks. For similar reasons, companies headquartered in the northeastern U.S. also purchased terrorism insurance at a higher rate than companies in other regions.

As an alternative to commercial insurance, some businesses choose to self-insure their terrorism risks through captives, which are insurance companies they own or can rent. For captive owners, the cost of implementing terrorism insurance programs often compares favorably to the cost of buying from commercial insurers. Captive insurers can also generally offer broader coverage than commercial insurers.

The Importance of a Government Role

In the U.S., insurers benefit from reinsurance protection in the event of a sizable loss through the federal Terrorism Risk Insurance Program. First established in 2002 following the September 11, 2001, attacks and most recently reauthorized via the Terrorism Risk Insurance Program Reauthorization Act of 2015 (TRIPRA), this backstop has helped to keep property terrorism insurance affordable and widely available for buyers. The U.S. is one of more than 20 countries in which local terrorism insurance pools or government reinsurance mechanisms are available.

Local pools continue to evolve to meet the changing needs of businesses. For example, both the U.S. backstop and the UK’s Pool Re now provide reinsurance protection for cyber-insurance policies. Pool Re also plans to provide coverage for nonphysical damage business interruption losses in the future.

The U.S. federal backstop remains especially important to continued market stability and health. Absent TRIPRA, which expires December 31, 2020, there is not sufficient insurance and reinsurance capital available to provide comprehensive terrorism coverage to U.S. insurance buyers.

As congressional representatives evaluate potential options ahead of TRIPRA’s expiration, they will likely focus on trying to expand the private insurance market role in managing conventional acts of terrorism while still providing a critical backstop for large-scale and unconventional attacks.

Modeling Terrorism Risk

To make appropriate decisions on how to finance their terrorism risk, businesses must first understand that risk.

Since terrorism risk models were first developed in 2002, insurers, reinsurers, and modeling companies have continually refined their models and underlying assumptions. This has improved their ability to quantify terrorism risk, but modeling that risk is often more challenging than it is for other hazards.

Less Predictable and Less Data

Compared to hurricanes and earthquakes, for example, acts of terrorism occur less frequently, meaning there’s less data to work with. They are also harder to predict, because they stem from deliberate human decisions rather than natural processes. Ultimately, this means that businesses can generally estimate the costs they would incur in the event of an attack, but it is much harder to estimate the probability of an attack affecting them.

Still, modeling terrorism risk can inform decisions about how much insurance to purchase, how to structure property terrorism and other insurance policies, and whether to consider a captive or other alternative to commercial insurance. And beyond insurance, modeling can help businesses make smarter choices to mitigate potential attacks and more effectively manage an attack’s after-effects.

The cost of potential attacks to global businesses remains high. Organizations must adapt to the changing pattern of terrorism if they are to limit its effects on their operations and employees. That extends to carefully modeling the impact of different attack scenarios and evaluating the financing options that are right for them.

Pakistan-India Trade Grows Despite Tensions

Despite seventy years of constant tension and saber-rattling, trade between India and Pakistan is growing steadily. The political relationship between the two countries is dogged by claims and counterclaims of terrorism and cross-border violations—such as recent allegations by Pakistan that its diplomats and their families were harassed and intimidated by Indian intelligence services. The two countries are usually framed in diametrically opposed terms: Pakistan as a nuclear-armed hotbed of terrorism with a penchant for military dictatorships, and India as the world’s largest democracy.

The impact of these tensions on the trade relationship is evident, as the accompanying chart shows. Yet despite a close correlation between major incidents of violence and periods of trade stagnation, the last 20 years have seen steady growth.

Tremendous Untapped Potential

Observers on both sides lament the unrealized trade between the two countries. Current trade between India and Pakistan amounts to $2.6 billion, but some analysts estimate the annual potential at $19.8 billion, while others put it as high as $40 billion.

Although Indian and Pakistani business leaders are aware of this tremendous potential, both sides continue to protest different aspects of the trade relationship. India has concerns about China’s involvement in the CPEC project—the China–Pakistan Economic Corridor—while Pakistan is worried about India’s bypassing of Pakistan through the development of the Chabahar Port in Iran.

Cement Leads the Way

The balance of trade has traditionally been in India’s favor – imports to Pakistan from India were more than five times the value of exports to India from Pakistan in 2015 – but recently this has begun to shift. A significant factor has been growing demand from India for Pakistani cement, which has been fortunate for Pakistan in light of a drop-off in demand from Afghanistan.

The CPEC projects are also providing needed growth in Pakistan’s domestic economy that can develop further export industries.

India’s machinery, machine parts, electronic appliances and chemicals industries are kept buoyant by Pakistani imports of these goods. Additionally, India seeks to build on the 40 percent increase in trade it experienced with China in 2017 by pressing China to open its markets to Indian pharmaceuticals and IT.

Harassment Is Widespread

Business figures have lobbied their governments to improve trade opportunities and, in particular, to reduce disruption at the border, where traders complain of everything from bureaucratic impediments to harassment by security forces suspicious of their cross-border activities.

Because of pervasive tensions, informal trade is high, as it bypasses customs and border officials and routes goods through third countries such as the United Arab Emirates. However, these circuitous routes add unnecessary time and cost and result in lost tax revenue for both governments.


India and Pakistan have proposed a number of solutions for improving their trade relationship. Long-standing agreements such as the South Asian Free Trade Area can act as a valuable mechanism, while India’s awarding of most favored nation status to Pakistan in 1996 has certainly helped.  

According to Mohsin Khan of the Atlantic Council, Pakistan risks its entire future if it does not recognize India’s role as the regional growth engine: “Pakistan must hitch its wagon to the locomotive or risk getting completely left behind.” Regrettably, this is not something Pakistan, with its security narrative of fear and suspicion of India, is likely to embrace. Officials promoting cross-border trade continue to weather “nationalistic diatribes” in Pakistan, where they have to deflect any suggestion that their motivations present a danger to the state.

All too frequently, security concerns, either in the form of an actual attack or the specter of violence, intrude into what should be a mutually highly advantageous trading relationship.

How to Avoid Burnout at Work—3 Simple Steps

A recent survey conducted by Duke University and Grenoble École de Management revealed that most CFOs are working an unhealthy 70 hours a week and would prefer to be working close to 50 hours. And CFOs who do work 50 hours a week would prefer to work 40.

Furthermore, according to a CFO survey, only 12 percent of senior finance executives manage to maintain a 50-50 work-life balance (see Figure 1).


New Technologies Bring New Responsibilities

The long hours are attributable to an increased role for CFOs, who are responsible for both financial stability and long-term corporate strategy—while facing new challenges in governance, talent strategy, cyber risk, and the use of emerging technologies. As machine learning, AI, and blockchain become integral to all parts of a business, and particularly finance, some predict that the IRS will be collecting taxes via blockchain by 2021 and that AI will account for 30 percent of audits by 2023. CFOs, therefore, have little choice but to acquaint themselves with these technologies.

The CFO, of course, is not the only senior executive who puts in long hours. All senior executives face extremely high job demands and significant pressure to perform. Junior employees are also prone to overwork, whether pushed by their managers or pulled by always-on technology that doesn’t allow people to disconnect.

Burnout is Almost Universal

According to The Wall Street Journal, a Harvard Medical School study found that some 96 percent of senior leaders feel somewhat burned out, and a third describe the syndrome as extreme. These high achievers often feel that they are indispensable and have (or think they have) near-superhuman stamina and resilience.

Whatever the reason for overwork, no senior leader can sustain such hours week in and week out. A large body of research suggests that long hours usually backfire for both people and their companies, in terms of health problems and lagging productivity. Overwork leads to stress and exhaustion, which can be masked in the short term but over the long term produce faulty decision-making, poor communication, and volatile emotions.

A Tale of Diminishing Returns

According to the Harvard Business Review, numerous studies by Marianna Virtanen of the Finnish Institute of Occupational Health and her colleagues (as well as other studies) have found that overwork and the resulting stress raise the risk of a range of health problems, including impaired sleep, depression, heavy drinking, diabetes, impaired memory, and heart disease.

In a study of consultants by Erin Reid, a professor at Boston University’s Questrom School of Business, managers could not tell the difference between employees who actually worked 80 hours a week and those who just pretended to, suggesting that overwork does not improve productivity.

In sum, the story of overwork is a story of diminishing returns: Keep overworking, and you’ll work in a progressively careless manner on tasks that are increasingly meaningless.

How to Protect Your Time

To address these health and productivity issues, CFOs and other senior leaders must protect their most precious resource—time—which is being stolen from them by excessive work demands. To help protect time, here are three practices that have proved to be useful:

Value Personal “Slow Time”

Most leaders’ diaries are jammed with executive meetings, personnel issues, and reading papers and emails. Calendars are full and constantly juggled to squeeze in more events. Much has been written on the value of “slow time” and how leaders need to make time to step back from the business, relax, and take time to think.

To create time for reflection, put two 45-minute slots in your calendar each day to step away from your desk and mobile. Find a quiet place where you can step back and reflect on what is happening and on how to be more impactful in your work.

One leader goes to a nearby park. If the weather is good, he sits on the bench; if it is raining, he walks with an umbrella. Such breaks allow him to return to work refreshed, energized, and productive.

Delegate Something You Enjoy

Rather than delegating “low value,” boring, or administrative pieces of your role that don’t motivate people and that you often have to check and redo, list your top five favorite things to do. Now pick two of them that you can delegate to members of your team. 

You will find that they are more motivated because you have asked them to do something that they know you consider important. That will spur them to do a great job—and free up your time.

Use Your First 15 Minutes Wisely

Your first 15 minutes in the office can set the tone for the rest of the day. Rather than diving straight into a meeting or reading email, try to use those first 15 minutes, in three-minute chunks, to:

  • Walk around the office and talk to people—ideally connecting with people you don’t normally speak with.
  • Read something that you would not normally read—poetry, literature, a different newspaper.
  • Step back and plan the day—taking time to reflect and think about what you want to achieve.

Simple steps such as these help increase blood flow, build new working relationships, stimulate the mind with new reading, and ensure that you start the day feeling in control.

The challenge is to take the time you gain and use it to do something other than work. Short-term and long-term, this will have a positive impact on your work productivity—and your health.

The Impact of Brexit, by Sector and Region

Brexit will take its largest toll on high-value British industries, such as finance and automotive, and on the city of London in particular, according to a new analysis by Oliver Wyman and law firm Clifford Chance that forecasts the impact of Brexit by sector and region in both the UK and the EU.

In the first of a series of reports, the consultancies estimate that Brexit will add direct annual costs of 31 billion pounds ($43.5 billion) for exporters of the EU-27 countries and 27 billion pounds for UK exporters. That’s the equivalent of 1.5 percent of gross value added (GVA) for the UK and 0.4 percent of GVA for the EU.*
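As a quick sanity check on those percentages, the GVA denominators implied by the report’s own figures can be back-solved. The GVA totals used below are therefore rough, illustrative values rather than official statistics, and the helper function is invented for the example:

```python
# Cost as a share of gross value added (GVA), in percent.
# GVA totals are back-solved from the article's figures, for illustration.

def cost_share_of_gva(cost_bn, gva_bn):
    """Percentage share of an annual cost against a GVA total, both in bn."""
    return round(100 * cost_bn / gva_bn, 1)

print(cost_share_of_gva(27, 1_800))  # UK: 27bn over roughly 1,800bn -> 1.5
print(cost_share_of_gva(31, 7_750))  # EU-27: 31bn over roughly 7,750bn -> 0.4
```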

Exhibit 1: Estimated Cost of Tariff and Nontariff Barriers on UK Economy, by Sector


Source: Oliver Wyman and Clifford Chance analysis

The 6 Hardest-Hit Sectors

In the UK, 70 percent of the direct tariff and nontariff costs will be borne by just six industries: financial services, automotive, agriculture, food and drink, consumer goods, and chemicals and plastics. Direct costs will exceed 5 percent of GVA in aerospace, chemicals and plastics, metals and mining, and life sciences, all industries “where firms are highly integrated into European supply chains,” the authors state.

“But the largest impact,” the authors say, “will come from Europe’s financial services, due to London’s role as Europe’s financial center and the fact that it will be hard to mitigate impacts in this sector.”

Exhibit 2: Estimated Cost of Tariff and Nontariff Barriers, by UK Region

Source: ONS 2016; Oliver Wyman and Clifford Chance analysis

Accordingly, London will shoulder about 40 percent of the total cost of Brexit, or about 2.5 percent of its GVA. The impact on other UK regions depends on their industrial mix. In some regions, the lost GVA approaches that of London.
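Combining the two percentages for London with the UK headline cost gives a back-of-the-envelope estimate of London's economic size; this is a reading of the report's numbers, not a figure it states directly:

```python
uk_total_cost = 27e9        # UK exporters' total direct cost, in GBP
london_cost_share = 0.40    # London bears ~40% of the total...
london_gva_hit = 0.025      # ...which equals ~2.5% of London's GVA

london_cost = uk_total_cost * london_cost_share     # ≈ £10.8bn
implied_london_gva = london_cost / london_gva_hit   # ≈ £432bn

print(f"London's share of the cost: £{london_cost / 1e9:.1f}bn")
print(f"Implied London GVA:         £{implied_london_gva / 1e9:.0f}bn")
```

An implied GVA in the low £400-billions is consistent with ONS estimates of London's economy, again suggesting the report's regional figures hang together.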

Impact on the EU

As in the UK, a handful of high-value industries in the EU will bear the brunt of direct costs from Brexit. But “at an aggregate level EU-27 firms are better positioned to mitigate cost increases,” the report states. “This is because a larger proportion of their exports are in goods rather than services, and they also typically have a wider range of alternative suppliers to choose from within the EU-27.”

Exhibit 3: Estimated Cost of Tariff and Nontariff Barriers on EU-27, by Sector

Source: Oliver Wyman and Clifford Chance analysis

“Country-level differences will be pronounced,” the report adds. “In Ireland, for example, the exposure of the agricultural sector to UK consumers is a particular pinch point, and in Germany four of the sixteen states—Bavaria, Baden-Württemberg, North Rhine-Westphalia, and Lower Saxony—will shoulder around 70 percent of the total impact due to their respective strength in automotive and manufacturing.”

If the UK remains in a comprehensive customs union with the EU, then the costs of Brexit would be cut by nearly 40 percent for the UK and by more than 50 percent for the EU, the authors say. But the benefits would mostly be felt outside of London, because customs unions cover goods rather than services.

Small Companies Will Be Worst Hit

Some companies will be in a better position than others to mitigate the impact of Brexit. Auto and aerospace firms might be able to switch to domestic suppliers, while financial firms will have fewer such opportunities.

Smaller firms are also more likely to be caught flat-footed: They are more likely to have never traded outside the EU, and therefore to have no experience with customs procedures and the other formalities of cross-border trade.

As the post-Brexit picture becomes clearer, exporters should begin to prepare for the outcome, the authors say. “The best prepared firms are preparing contingencies now based on the direct impacts on themselves, their supply chains, customers, and competitors.”

Exhibit 4: Operational and Strategic Considerations

Source: Oliver Wyman and Clifford Chance

Future reports will delve deeper into topics such as the effect of Brexit on costs and pricing decisions by firms. See the full report for more details on methodology and impacts.

*Gross value added (GVA) is a measure of the value of goods and services provided in an area, industry or sector. GVA is GDP plus subsidies minus taxes.

The report focuses on direct impact of new tariff and nontariff barriers (such as regulatory restrictions) assuming that the UK and EU-27 revert to a World Trade Organization trading relationship. It does not model impacts such as migration, pricing changes or inter-country free trade agreements, which are likely to exacerbate the impact. 

Is the Future of Farming Vertical?

A century ago, fresh produce had no choice but to be local and seasonal, but technology changed all of that. Innovations in refrigeration and transportation allowed food production to concentrate in regions that could produce a wider variety of crops all year round. Today, the U.S. has a third as many farms and three times as many people as it did a century ago.

Our food is coming from fewer places, but feeding more people, most of whom live in cities. Three states—California, Arizona, and Florida—produce more than three-quarters of the nation’s vegetables, measured by value. California alone produces 400 different commodities, including one-third of all U.S. fresh fruits and vegetables and two-thirds of all the nuts.

The Pressure of Climate Change

Once again, America is having to rethink where and how it produces its food. In the 21st century, the U.S. food system is likely to change even more than it did in the past century. Because of climate change, major production areas such as California will experience extremes in temperature and precipitation, generally growing hotter and drier, and at a pace faster than predicted.

The U.S. food system needs to diversify production. But instead of expanding into grasslands or areas already used for other crops, we should think about growing food at scale in big cities. Would our food system benefit from “vertical farms”? And if so, can we seize an opportunity to use existing, stranded assets?

Are Vertical Farms the Answer?

Vertical farms are usually indoor operations with stacked or wall-like planters that leverage networked technology to monitor and nourish plants precisely, often without the use of soil.

AeroFarms is one notable vertical farm, headquartered in a former steel mill in Newark, New Jersey. Though its flagship farm was only seeded in September 2016, it reports yielding up to 1,000 tons of greens per acre in a year. Plenty Inc. has vertical farms just outside of San Francisco and Seattle and uses 20-foot-high walls, from which the company reports significantly higher yields and less water use than conventional agriculture. With the costs of production coming down, it claims to offer “Whole Foods quality at Walmart prices.”

Taking Advantage of Stranded Assets

At least 100 vertical food production startups are located in U.S. cities, but few take advantage of stranded assets, such as old thermal power plants.

Thermal power plants have qualities that make them inherently amenable to vertical farming. In the U.S., they account for roughly 45 to 50 percent of all water withdrawals, most of it used for cooling during power generation. Disposing of the hot water is both a nuisance and a cost. Heat, water, energy and captured slipstream emissions are all byproducts of energy generation and could be available for producing food.

By creating vertical, urban food production, we can give consumers what they want: local food that’s produced transparently and sustainably.

Reusing this heat, water and energy can create new income streams for power plants and reduce greenhouse gas emissions. Thermal power plants are often located near hubs of the U.S. Postal Service. Could the post office begin to distribute fresh produce locally?

There are social advantages, too. In most urban areas, thermal power plants are surrounded by low-value brownfields that have little or no productive use. Many have been taken over by cities for back taxes, and they are usually in “food deserts”—poor neighborhoods with little to no access to grocery stores with fresh produce. These areas could benefit from vertical farms and fresh produce.

Cutting Down on Waste

While there’s a perception that local food has a smaller carbon footprint because it travels shorter distances from farm to table, research shows that transportation accounts for only a small portion of food’s greenhouse gas emissions. That said, moving food production closer to consumers would still deliver environmental benefits by reducing waste.

About a third of the food that’s produced around the world is wasted. Estimates are even higher in the U.S., where as much as half of the fruits and vegetables grown are not eaten.

They are lost or wasted in the field, in transit, in supermarkets, in food service, and in the home—a waste not only of food, but also of the land, water, energy, greenhouse gases, fertilizer, soil and other resources associated with producing it. If produce could be grown closer to where it’s consumed, it could shorten the time between harvest and sale by two or three weeks, extend the life in the home, and reduce waste significantly.

Narrowing the Gap between Farm and Table …

Finally, vertical farming can create business opportunities. The centralization and specialization of food production has put considerable distance between consumers and their food, both figuratively and literally. By creating vertical, urban food production, we can narrow that gap and give more consumers what they want: local food that’s produced more transparently and sustainably, that is, with less waste and fewer impacts.

… But More Research Is Needed

How much energy is needed to grow food without the sun or soil? How much will it cost? How much better is vertical farming in terms of net greenhouse gas emissions for food consumed? If urban, vertical farming takes the risks—precipitation, severe storms, heat and cold—out of farming, will large companies dominate the food system even more than they do today? Can communities be engaged to ensure a more equitable and inclusive food system? And, all these questions aside, what are the most likely unintended consequences? The sooner we answer these questions, the sooner we can act.

Many parties have vested interests in the answers: Power companies can benefit from added value; retailers can benefit from shorter supply chains and reduced waste; government agencies can improve local food systems; communities can eliminate their food deserts; tech companies can drive research and development; and academics and research institutions can build capacity about new ways to grow our food.

Preserving Biodiversity

The biggest threat to biodiversity and critical ecosystem functions is where and how we produce food. As long as we depend on soil to produce food, we will require more and more land and greater surpluses to feed everyone. Producing food in cities could be part of the solution by producing more with less, which would relieve pressure on the natural resource base.

Two Simple Steps to Unlock the Hidden Wealth of Cities

I was in Boston recently. Apart from the bone-numbing cold, what struck me most was its public transport system, which is much better than in most US cities. The Massachusetts Bay Transportation Authority’s omnipresent “T” signs guided me towards a multitude of subway lines, buses and commuter trains that made a car far less essential than in many other parts of the United States. In fact, Boston is planning to boost its transport system even further by exploring the use of self-driving vehicles in an initiative with the World Economic Forum.

But while Boston’s transport is good, it’s perhaps still not as good as it could be. One reason is that like many other cities, Boston does not assess the market value of its economic assets.

Unlocking the public value of Boston’s poorly utilized real estate or monetizing its transportation and utility assets—smarter asset management, in other words—would yield a return that would enable the city to more than double its infrastructure investment. Through smarter asset management, Boston could improve its public transport system and other services without needing to opt for privatization, raise taxes or cut spending elsewhere.

What’s the catch? Actually, there isn’t one.

Opening Boston’s Books

For the past 50 years, government ownership of vast commercial holdings has triggered a phony war between private and public ownership, especially in Europe, but recently also in the United States. What matters most is the quality of asset management, rather than whether it is public or private.


Drawing Up the Balance Sheet

Compiling an accurate balance sheet is a crucial step towards adopting a management-focused approach, yet despite its importance it is shockingly rare in most cities. With a list of assets in hand, and a proper understanding of their market value, taxpayers, politicians, and investors can better assess the long-term consequences of political decisions.

Let’s look at Boston.

At first glance, the city does not appear to be particularly wealthy. Its financial statements underestimate the true value of public assets, reporting total assets worth $3.8 billion, of which $1.4 billion is real estate. That is less than its liabilities of $4.6 billion in 2015.


Untapped Wealth…

However, like most cities, Boston reports its assets at book value, which is tied to the historic cost. If holdings were reported using the International Financial Reporting Standards, which require the use of market value for assets, Boston’s holdings would be worth significantly more than what is currently reported. In other words, the city is operating without fully understanding its hidden wealth.

And that wealth is vast.

An estimate of the market value of Boston’s property portfolio suggests that the city’s real estate alone is worth some $55 billion. But because Boston’s leaders have not accounted for this value, they cannot fully measure the cost of leaving these assets under-managed. If they could, they would get a sense of the benefits to be gained by developing these assets more astutely.

…Produces Significant Yield

After accounting for the market value of municipal assets, the next step towards sound asset management is to understand the yield a city earns from the revenue and rising market value of its assets. This is crucial for comparing all investment options, but also for determining whether performance has been satisfactory, and to show stakeholders their wealth is being managed responsibly.

Using Boston as an example again, let’s cautiously assume the city could earn a 3 percent yield on its commercial assets with more professional and politically independent management. A modest yield of 3 percent on a portfolio worth $55 billion would amount to an income exceeding its current total revenues, and therefore enable it to more than double its investments.
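The arithmetic behind that claim is straightforward; the sketch below uses the article's own assumptions (the $55 billion valuation and the author's cautious 3 percent yield, neither of which is an observed return):

```python
portfolio_value = 55e9   # estimated market value of Boston's real estate, USD
assumed_yield = 0.03     # the article's cautious 3% annual yield assumption

annual_income = portfolio_value * assumed_yield
print(f"Annual income at 3%: ${annual_income / 1e9:.2f} billion")
```

That works out to roughly $1.65 billion a year, which is the income stream the article argues could fund a doubling of the city's infrastructure investment.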


And Boston is by no means exceptional. On the contrary, its approach to historical valuation is shared by cities worldwide. As a result, public wealth is trapped in real estate and other non-optimized commercial assets.

Introducing the ‘Urban Wealth Fund’

The best way forward would be to consolidate publicly owned assets in a common investment vehicle that Swedish economist Stefan Fölster and I have called an “urban wealth fund.” The fund would be managed at arm’s length in a transparent, accountable manner, guided by a city mandate but directed by a dedicated professional staff to keep it free from political influence.

This sounds challenging, but it can be done. Hamburg’s HafenCity GmbH, and parts of Copenhagen that were revitalized by the City & Port Development Company, are just two examples of urban areas that have used this type of development mechanism. These efforts have not only increased the amount of residential housing; they have also funded vital infrastructure such as the Copenhagen Metro, schools, and universities. In Hamburg, the recently opened Elbphilharmonie concert hall was also funded via a government-owned holding company.

Accelerating Sustainable Cities

Managing city assets better would help local leaders boost their economies, finance social and economic infrastructure, including affordable housing, and develop strategies for vibrant and innovative mixed-use projects. Better management of city assets would also help cover the costs of required maintenance without competing with government budgets, leaving more for spending on health care, education, and other social initiatives.

Professional management of the public assets that are already in place will ultimately accelerate the development of human-centered, sustainable and affordable urban infrastructure and services.

As cities prepare for the challenges of the fourth industrial revolution, this will enable them to develop innovative policies and invest in the future.

This article was first published by the World Economic Forum.

Governance Remains a Key Imperative as Blockchain Evolves

This is the final piece in a five-part series on the business impact of blockchain technology.

To wrap up blockchain week, BRINK had a conversation with Joanna Hubbard, CEO of Electron, a London-based blockchain startup, about the role of government in blockchain, the opportunities it presents for new business models to flourish, and the risks associated with the technology. This Q&A has been edited for length and clarity.

BRINK: How has blockchain continued the trend toward decentralized power sources, which was already underway even before blockchain appeared on the scene?

Joanna Hubbard: I don’t think blockchain is changing decentralization—the cat was out of the bag already. I think what it’s doing, or what it’s capable of doing, is creating a new coordinating architecture that allows you to actually engage really small assets as a kind of safe central source. You can essentially coordinate all of your trading interests on the platform so that everyone trusts that they’re going to get from the market what they want from the market. I think what blockchain will enable us to develop is that kind of market trust. You bring enough liquidity and different types of assets and products into a market, and you can essentially trust that a really diverse set of assets and requirements will help balance an increasingly intermittent overall system.

BRINK: What’s the role of government in the energy sector as it relates to blockchain?

Hubbard: I’m going to be really unpopular in saying this, but I think blockchain is a fantastic source of governance. I think that government will be involved in setting the essential basic rules and parameters of the systems and evolving those rules. A lot of the rules are going to be around data validity: what kind of license do you need to participate, what data sets can you see, how much of the system can you interact with? I think governments are going to be key in answering those questions.

Because energy is a local market product, different governments and different regions are going to have different rules. Encoding these different rules on the system creates a kind of transparency about who is expected to trade and on what basis—and that gives rise to a business world with a much more efficient market structure.

You can mitigate a lot of the classic risks around hacking through running a consortium blockchain with visibility, transparency and data.

BRINK: But doesn’t blockchain bypass the role of government, as it were? Does government still need to play a role in this?

Hubbard: Absolutely, in the end it does. If we’re talking about consortium blockchains, that means that you have to be permissioned to participate, and even then you operate under full sanctions. So governments are creating rules around, for example, how much price risk the customer can be exposed to. So if someone wants to change their trade, they might not want to turn their electric power on and find that they got charged 1,000 pounds for half an hour of electricity. They need protection, and that will be a role specific to government and built in to the trading consortium blockchain.

BRINK: Right. But wouldn’t they need to be in the business of owning energy sources?

Hubbard: No. And I really think they shouldn’t be in the business of owning energy sources. I don’t think that’s how you get an efficient market outcome.

BRINK: What sorts of new businesses do you think will emerge in the next five to 10 years as a result of blockchain?

Hubbard: Partly data-driven businesses, partly distributed energy resources (DER) aggregation businesses and also energy-shifting businesses. For example, a company is already looking at providing batteries for electric vehicles based on the fact that it can then trade those batteries while they sit in the vehicle, which takes away vehicle owners’ worry that the battery will degrade too fast. It also enables someone to essentially aggregate massive amounts of flexible, distributed loads. I also think the aggregator model has been really undervalued recently. Who will provide people with services that manage all their utilities, trade those utilities for them and guarantee the price for them? Basically we should expect to see business models that require much more granular information and more open market access than we have today.

BRINK: You’ve clearly gone into this business because you see this as a huge opportunity. But what kinds of risks are involved in shifting to a blockchain-oriented economy?

Hubbard: I think you can mitigate a lot of the classic risks around hacking through running a consortium blockchain with visibility, transparency and data. A consortium blockchain is less likely to be hacked than a public blockchain because you have to have permission to engage on that blockchain. And even if one node were taken over, the other nodes would be able to identify that the node was hacked and cut it off from the system. And you would be able to reverse the transactions back to the state before the hack. So that would reduce some of those risks that are most common.

