Technology Evidence File

Jun19: New Technology Information: Indicators and Surprises
Why Is This Information Valuable?
“A New Law Suggests Quantum Supremacy Could Happen This Year” by Kevin Hartnett from Quanta Magazine, in Scientific American, 21Jun19

“Quantum computers are improving at a doubly exponential rate… That rapid improvement has led to what’s being called ‘Neven’s law,’ [named after Hartmut Neven Director of Google’s Quantum Artificial Intelligence Lab] a new kind of rule to describe how quickly quantum computers are gaining on classical ones… With double exponential growth, ‘it looks like nothing is happening, nothing is happening, and then whoops, suddenly you’re in a different world,’ Neven said. ‘That’s what we’re experiencing here.’”

“The doubly exponential rate at which, according to Neven, quantum computers are gaining on classical ones is a result of two exponential factors combined with each other.

“The first is that quantum computers have an intrinsic exponential advantage over classical ones: If a quantum circuit has four quantum bits, for example, it takes a classical circuit with 16 ordinary bits to achieve equivalent computational power. This would be true even if quantum technology never improved.

“The second exponential factor comes from the rapid improvement of quantum processors. Neven says that Google’s best quantum chips have recently been improving at an exponential rate.”
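The arithmetic behind the two compounding exponentials can be made concrete with a small sketch (the growth numbers below are illustrative, not Google's actual figures):

```python
# Illustrative sketch of Neven's law: if the qubit count n(t) itself grows
# exponentially across chip generations t, then the classical resources
# needed to keep up -- roughly 2**n(t) -- grow doubly exponentially.

def qubits(t, n0=2, growth=2):
    """Hypothetical qubit count at generation t, doubling each generation."""
    return n0 * growth ** t

def classical_equivalent_bits(t):
    """Classical bits needed to match n(t) qubits (4 qubits ~ 16 bits, etc.)."""
    return 2 ** qubits(t)

for t in range(5):
    print(t, qubits(t), classical_equivalent_bits(t))
```

By the fourth hypothetical generation the chip has only 32 qubits, but its classical equivalent already exceeds four billion bits: the "nothing is happening, then whoops" profile Neven describes.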
Progress in Quantum Computing creates the potential for an exponential increase in the speed at which artificial intelligence technologies improve. However, that also requires substantial advances in the rate at which software improves. As we have noted in past issues, this includes the rate at which AI software advances from associational/statistical approaches to ones based on far more difficult causal and counterfactual reasoning, which in turn heavily depend on advances in natural language processing.

In June, four new papers provided new indications of progress in this area.
In “A Survey of Reinforcement Learning Informed by Natural Language”, Luketina et al find that, “To be successful in real-world tasks, Reinforcement Learning (RL) needs to exploit the compositional, relational, and hierarchical structure of the world, and learn to transfer it to the task at hand. Recent advances in representation learning for language make it possible to build models that acquire world knowledge from text corpora and integrate this knowledge into downstream decision making problems.”

In “Shaping Belief States with Generative Environmental Models for Reinforcement Learning”, Gregor et al from DeepMind note that they “are interested in making agents that can solve a wide range of tasks in complex and dynamic environments. While tasks may be vastly different from each other, there is a large amount of structure in the world that can be captured and used by the agents in a task-independent manner.

“This observation is consistent with the view that such general agents must understand the world around them. Algorithms that learn representations by exploiting structure in the data that are general enough to support a wide range of downstream tasks is what we refer to as unsupervised learning or self-supervised learning. We hypothesize that an ideal unsupervised learning algorithm should use past observations to create a stable representation of the environment.

“That is, a representation that captures the global factors of variation of the environment in a temporally coherent way.

“As an example, consider an agent navigating in a complex landscape. At any given time, only a small part of the environment is observable from the perspective of the agent. The frames that this agent observes can vary significantly over time, even though the global structure of the environment is relatively static with only a few moving objects. A useful representation of such an environment would contain, for example, a map describing the overall layout of the terrain. Our goal is to learn such representations in a general manner. Predictive models have long been hypothesized as a general mechanism to produce useful representations based on which an agent can perform a wide variety of tasks in partially observed worlds.”

The authors then describe their progress toward achieving this goal, give an example of the current state of development, and describe the remaining obstacles to be overcome.

In “Deep Reasoning Networks: Thinking Fast and Slow”, Chen et al address the challenge described by the DeepMind team, and “introduce Deep Reasoning Networks (DRNets), an end-to-end framework that combines deep learning with reasoning for solving complex tasks, typically in an unsupervised or weakly-supervised setting. DRNets exploit problem structure and prior knowledge by tightly combining logic and constraint reasoning with stochastic-gradient-based neural network optimization.” They conclude with examples that show their approach produces substantial gains in performance compared to previous techniques.
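The coupling DRNets rely on, reasoning about constraints inside a gradient-based optimizer, can be illustrated in miniature by turning a piece of prior knowledge into a differentiable penalty (a generic sketch with an invented objective, not the authors' code):

```python
# Generic sketch: encode prior knowledge (the constraint x + y = 1) as a
# differentiable penalty and minimize it jointly with a data-fitting loss
# by plain gradient descent. The objective here is invented for illustration.

LAM = 100.0  # penalty weight on the constraint term

def grad(x, y):
    """Gradient of (x - 2)^2 + (y - 3)^2 + LAM * (x + y - 1)^2."""
    gx = 2 * (x - 2.0) + LAM * 2 * (x + y - 1.0)
    gy = 2 * (y - 3.0) + LAM * 2 * (x + y - 1.0)
    return gx, gy

x, y = 0.0, 0.0
for _ in range(5000):
    gx, gy = grad(x, y)
    x -= 0.004 * gx
    y -= 0.004 * gy

# The unconstrained minimum is (2, 3); the penalty pulls the solution
# onto (approximately) the constraint surface x + y = 1.
print(round(x + y, 2))  # prints 1.02
```

With a large penalty weight the optimizer lands close to the constraint surface; DRNets apply the same principle to encode logic and domain rules alongside neural network losses.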

Finally, in “Does It Make Sense? And Why? A Pilot Study for Sense Making and Explanation”, Wang et al observe that, “introducing common sense to natural language understanding systems has received increasing research attention. [Yet] it remains a fundamental question on how to evaluate whether a system has a sense making capability. Existing benchmarks measure commonsense knowledge indirectly and without explanation.”

To accelerate this process, they “release a benchmark to directly test whether a system can differentiate natural language statements that make sense from those that do not make sense. In addition, a system is asked to identify the most crucial reason why a statement does not make sense. We evaluate models trained over large-scale language modeling tasks as well as human performance, and show the different challenges for system sense making.”
“Predicting Research Trends with Semantic and Neural Networks with an Application in Quantum Physics”, by Krenn and Zeilinger
This is a very thought-provoking paper on how advances in natural language processing and network analysis can be combined with a large body of textual data to both forecast and identify potential scientific (and engineering) advances. In theory, this approach has the potential to increase the productivity of R&D spending, which has been declining in recent years.
May19: New Technology Information: Indicators and Surprises
Why Is This Information Valuable?
“Gifted classes may not help talented students move ahead faster”, by Jill Barshay, Hechinger Report
“A large survey of 2,000 elementary schools in three states found that not much advanced content is actually being taught to gifted students… ‘Teachers and educators are not super supportive of acceleration,’ said Betsy McCoach, one of the researchers and a professor at the University of Connecticut.”

At a time when performance is increasingly dependent on a small number of “hyperperformers” or “superstar” talent (e.g., see “The Best and The Rest: Revisiting The Norm Of Normality Of Individual Performance” by O’Boyle and Aguinis, and “Superstars: The Dynamics of Firms, Sectors, and Cities Leading the Global Economy” by Manyika et al from McKinsey), this finding that the education of America’s most talented students is largely being neglected by public schools has worrisome implications for future performance.
“COBRA: Data-Efficient Model-Based RL through Unsupervised Object Discovery and Curiosity-Driven Exploration”, by Watters et al from DeepMind
Progress in unsupervised reinforcement learning is a key indicator of developing AI capability.

“Recent advances in deep reinforcement learning (RL) have shown remarkable success on challenging tasks. However, data efficiency and robustness to new contexts remain persistent challenges for deep RL algorithms, especially when the goal is for agents to learn practical tasks with limited supervision. Drawing inspiration from self-supervised “play” in human development, we introduce an agent that learns object-centric representations of its environment without supervision and subsequently harnesses these to learn policies efficiently and robustly.”
“Cognitive Model Priors for Predicting Human Decisions” by Bourgin et al
This is yet another indicator of accelerating improvement in AI technologies.

“Human decision-making underlies all economic behavior. For the past four decades, human decision-making under uncertainty has continued to be explained by theoretical models based on prospect theory, a framework that was awarded the Nobel Prize in Economic Sciences. However, theoretical models of this kind have developed slowly, and robust, high-precision predictive models of human decisions remain a challenge.

“While machine learning is a natural candidate for solving these problems, it is currently unclear to what extent it can improve predictions obtained by current

theories. We argue that this is mainly due to data scarcity, since noisy human behavior requires massive sample sizes to be accurately captured by off-the-shelf machine learning methods.”

“To solve this problem, what is needed are machine learning models with appropriate inductive biases for capturing human behavior, and larger datasets. We offer two contributions towards this end:

“First, we construct “cognitive model priors” by pretraining neural networks with synthetic data generated by cognitive models (i.e., theoretical models developed by cognitive psychologists).

“We find that fine-tuning these networks on small datasets of real human decisions results in unprecedented state-of-the-art improvements on two benchmark datasets.”

“Second, we present the first large-scale dataset for human decision-making, containing over 240,000 human judgments across over 13,000 decision problems. This dataset reveals the circumstances where cognitive model priors are useful, and provides a new standard for benchmarking prediction of human decisions under uncertainty.”
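The two-stage recipe quoted above (pretrain on abundant synthetic choices generated by a cognitive model, then fine-tune on scarce human data) can be sketched as follows. The risk-averse toy "cognitive model", the logistic learner, and the placeholder human labels are all invented for illustration, not taken from the paper:

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def make_problems(n):
    """Random choice problems: a risky option (payoff, prob) vs. a sure amount."""
    return [(random.uniform(0, 10), random.random(), random.uniform(0, 10))
            for _ in range(n)]

def cognitive_model(payoff, prob, sure):
    """Toy stand-in for a theoretical model: risk-averse utility u(v) = v ** 0.88."""
    return 1.0 if (payoff ** 0.88) * prob > sure ** 0.88 else 0.0

def fit(data, w=0.0, b=0.0, lr=0.01, epochs=200):
    """Logistic regression on the expected-value gap, trained by SGD."""
    for _ in range(epochs):
        for (payoff, prob, sure), y in data:
            gap = payoff * prob - sure
            p = sigmoid(w * gap + b)
            w -= lr * (p - y) * gap
            b -= lr * (p - y)
    return w, b

# 1) Pretrain on abundant synthetic choices from the cognitive model (the "prior") ...
synthetic = [(x, cognitive_model(*x)) for x in make_problems(500)]
w, b = fit(synthetic)

# 2) ... then fine-tune the same weights on a small "human" dataset
#    (placeholder labels here; the actual paper uses real human decisions).
human = [(x, cognitive_model(*x)) for x in make_problems(20)]
w, b = fit(human, w, b, epochs=20)
```

The synthetic pretraining acts as an inductive bias, which is why, per the authors, only small datasets of real human decisions are then needed for fine-tuning.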
“Robots and Firms” by Koch et al
This study is based on unique micro-level evidence, and highlights the substantial disruption that lies ahead as the adoption of AI and automation technologies accelerates.

The authors “study the implications of robot adoption at the level of individual firms using a rich panel data-set of Spanish manufacturing firms over a 27-year period (1990-2016). We focus on three central questions: (1) Which firms adopt robots? (2) What are the labor market effects of robot adoption at the firm level? (3) How does firm heterogeneity in robot adoption affect the industry equilibrium?...

“As for the first question, we establish robust evidence that ex-ante larger and more productive firms are more likely to adopt robots, while ex-ante more skill-intensive firms are less likely to do so. As for the second question, we find that robot adoption generates substantial output gains in the vicinity of 20-25% within four years, reduces the labor cost share by 5-7 percentage points, and leads to net job creation at a rate of 10%. Finally, we reveal substantial job losses in firms that do not adopt robots, and a productivity-enhancing reallocation of labor across firms, away from non-adopters, and toward adopters.”

Unfortunately, the authors don’t report the impact on compensation of the employment changes they highlight.
Coursera Global Skills Index 2019
This report provides further evidence that exponentially improving automation and AI technologies seem increasingly likely to produce substantial economic and social disruption, with, at this point, uncertain but likely negative political consequences.

“Two-thirds of the world’s population is falling behind in critical skills, including 90% of developing economies. Countries that rank in the lagging or emerging categories (the bottom two quartiles) in at least one domain [Business, Technology, and Data Science] make up 66% of the world’s population, indicating a critical need to upskill the global workforce. Such a large proportion of ill-prepared workers calls for greater investment in learning to ensure they remain competitive in the new economy…

“Europe is the global skills leader. European countries make up over 80% of the cutting-edge category (top quartile globally) across Business, Technology, and Data Science. Finland, Switzerland, Austria, Sweden, Germany, Belgium, Norway, and the Netherlands are consistently cutting-edge in all three domains. This advanced skill level is likely a result of Europe’s heavy institutional investment in education via workforce development and public education initiatives…

“Skill performance within Europe still varies, though. Countries in Eastern Europe with less economic stability don’t perform as well as Western Europe in the three domains; Turkey, Ukraine, and Greece consistently land in the bottom half globally.

“Asia Pacific, the Middle East and Africa, and Latin America have high skill inequality…

“The United States must upskill while minding regional differences. Although known as a business leader for innovation, the U.S. hovers around the middle of the global rankings and is not cutting-edge in any of the three domains. While there’s a need for increased training across the U.S., skill levels vary between sub-regions.

“The West leads in Technology and Data Science, reflecting the concentration of talent in areas like Silicon Valley. The Midwest shines in Business, ranking first or second in every competency except finance.

“The South consistently ranks last in each domain and competency, suggesting a need for more robust training programs in the sub-region.”
“The Wrong Kind of AI? Artificial Intelligence and the Future of Labor Demand” by Acemoglu and Restrepo
Written by two leading academic analysts of the economic and social impacts of advancing AI technologies and their implementation, this new paper provides an important warning about the disruption that lies ahead if current trends continue.

“Artificial Intelligence is set to influence every aspect of our lives, not least the way production is organized. AI, as a technology platform, can automate tasks previously performed by labor or create new tasks and activities in which humans can be productively employed.

“Recent technological change has been biased towards automation, with insufficient focus on creating new tasks where labor can be productively employed. The consequences of this choice have been stagnating labor demand, declining labor share in national income, rising inequality and lower productivity growth.

“The current tendency is to develop AI in the direction of further automation, but this might mean missing out on the promise of the ‘right’ kind of AI with better economic and social outcomes.”
Apr19: New Technology Information: Indicators and Surprises
Why Is This Information Valuable?
“Challenges of Real-World Reinforcement Learning”, by Dulac-Arnold et al

This paper is a timely reminder of the current limitations of this critical AI method when it is applied to the complex, evolving problems that we frequently confront.

“Reinforcement learning (RL) has proven its worth in a series of artificial domains, and is beginning to show some successes in real-world scenarios. However, much of the research advances in RL are often hard to leverage in real world systems due to a series of assumptions that are rarely satisfied in practice.

“We present a set of nine unique challenges that must be addressed to ‘productionize’ RL to real world problems. At a high level, these challenges are:

(1) Training off-line from the fixed logs of an external behavior policy.
(2) Learning on the real system from limited samples.
(3) High-dimensional continuous state and action spaces.
(4) Safety constraints that should never or at least rarely be violated.
(5) Tasks that may be partially observable, alternatively viewed as non-stationary or stochastic.
(6) Reward functions that are unspecified, multi-objective, or risk-sensitive.
(7) System operators who desire explainable policies and actions.
(8) Inference that must happen in real-time at the control frequency of the system.
(9) Large and/or unknown delays in the system actuators, sensors, or rewards.
“While there has been research focusing on these challenges individually, there has been little research on algorithms that address all of these challenges together…An approach that does this would be applicable to a large number of real world problems.”
“Survey on Automated Machine Learning”, by Zoller and Huber
Like the paper above, this is another important indicator of the type of obstacles that will need to be overcome in order to speed the deployment of artificial intelligence applications.

“Machine learning has become a vital part in many aspects of our daily life. However, building well performing machine learning applications requires highly specialized data scientists and domain experts. Automated machine learning (AutoML) aims to reduce the demand for data scientists by enabling domain experts to automatically build machine learning applications without extensive knowledge of statistics and machine learning. In this survey, we summarize the recent developments in academy and industry regarding AutoML.”
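One ingredient of the AutoML pipelines the survey covers is automated hyperparameter selection. A minimal sketch of that step, choosing a nearest-neighbor setting purely by validation error with no data scientist in the loop (toy data and model, not code from the survey):

```python
import random

random.seed(1)

# Toy dataset: y = 2x^2 - x plus noise, split into train / validation halves.
points = [(i / 10, 2 * (i / 10) ** 2 - i / 10 + random.gauss(0, 0.05))
          for i in range(40)]
train, val = points[::2], points[1::2]

def knn_predict(data, x, k):
    """Average the y-values of the k nearest training points."""
    nearest = sorted(data, key=lambda p: abs(p[0] - x))[:k]
    return sum(y for _, y in nearest) / k

def val_error(k):
    return sum((knn_predict(train, x, k) - y) ** 2 for x, y in val) / len(val)

# The "AutoML" step: pick the hyperparameter k automatically by validation error.
best_k = min(range(1, 10), key=val_error)
print(best_k)
```

Full AutoML systems extend the same search loop across feature preprocessing, model families, and architectures, which is what lets domain experts build applications without hand-tuning each piece.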
“Key Design Components and Considerations for Establishing a Single Payer Healthcare System”, by the US Congressional Budget Office
This is an excellent and brief summary of the key issues involved in establishing a more comprehensive single payer healthcare system in the United States.
“Prices Paid to Hospitals by Private Health Plans are High Relative to Medicare [Prices] and Vary Widely” by White and Whaley from RAND
The US healthcare system is characterized by multiple prices being charged by healthcare suppliers depending on who is paying. The lowest prices are for beneficiaries of the federal Medicare system for senior citizens; the highest are for uninsured individuals. Based on a sample of hospital prices charged in 25 states, this new report from RAND finds that in some states, charges to private health insurance payers are more than 300% higher than those charged to Medicare. This translates into higher insurance premiums for employers and reduced take-home pay for employees as their benefits costs increase.

The authors conclude that the potential cost savings from a “single price” system are, frankly, enormous. Whether moving to a single payer system would be required to achieve a single price system remains uncertain, as it is not clear that private health insurance payers and the employers who purchase such insurance have enough power to bring about this change on their own.
“After 20 Years of Reform, Are America’s Schools Better Off?”, by Hess and Martin
In the short-term, lack of improvement in US education results will slow the deployment of AI and automation technologies; in the medium and long-term, however, it will speed many companies’ transition to “labor-lite” business models (e.g., based on AutoML technology noted above). This will likely worsen inequality and political conflict, increase social safety net spending, and lead to much higher taxes to fund it.

“On the whole, it’s certainly possible to find some evidence of improvement — but progress is easiest to find in the metrics most amenable to manipulation…the U.S. also regularly administers the National Assessment of Educational Progress (NAEP) to a random, nationally representative set of schools. Because the NAEP isn’t linked to state accountability systems, it’s a good way to check the seemingly positive results of state tests.

“From 2000 to 2017 (the most recent year for which data is available), NAEP scores showed that fourth-grade math results increased 14 points, which reflects a bit more than one year of extra learning. Eighth-grade math results also demonstrated significant improvement, increasing ten points in the same period. Fourth- and eighth-grade reading scores, meanwhile, barely budged. And almost all of the math gains were made in the decade from 2000 to 2010; performance has pretty much flatlined since then…

“The Programme for International Student Assessment (PISA) is the only major international assessment of both reading and math performance. While PISA has its share of limitations, it offers a wholly independent view of American education and accountability systems.

“From the time PISA was first administered in 2000 to the most recent results from 2015, U.S. scores have actually declined, while America’s international ranking has remained largely static…there has been a lot of action, but not much in the way of demonstrated improvement. Just why this is the case remains an open question.”

See also, “Why American Students Haven’t Gotten Better at Reading in 20 Years”, by Natalie Wexler in The Atlantic, and “US Achievement Gaps Hold Steady in the Face of Substantial Policy Initiatives” by Hanushek et al
“Predicting Success in the Worldwide Start-Up Network”, by Bonaventura et al
This paper highlights the increasing ability of advanced AI techniques (combined, in this case, with social network analysis methods) to either automate or augment cognitively complex activities, while also improving predictive performance. The speed and breadth at which these emerging technologies will be deployed remains highly uncertain; however, this paper, and others like it, provide an indication of what lies ahead.

“By drawing on large-scale online data we construct and analyze the time varying worldwide network of professional relationships among start-ups. The nodes of this network represent companies, while the links model the flow of employees and the associated transfer of know-how across companies. We use network centrality measures to assess, at an early stage, the likelihood of the long-term positive performance of a start-up, showing that the start-up network has predictive power and provides valuable recommendations doubling the current state of the art performance of venture funds.

“Our network-based approach not only offers an effective alternative to the labour-intensive screening processes of venture capital firms, but can also enable entrepreneurs and policy-makers to conduct a more objective assessment of the long-term potentials of innovation ecosystems and to target interventions accordingly.”
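The underlying data structure is easy to picture: companies as nodes, employee moves as directed edges. The toy below uses hypothetical companies and a crude in-degree count as a stand-in for the time-varying centrality measures the authors actually use:

```python
from collections import defaultdict

# Hypothetical employee flows between five made-up companies:
# an edge (src, dst) means an employee (and know-how) moved src -> dst.
flows = [("A", "B"), ("A", "C"), ("B", "C"), ("D", "C"), ("C", "E")]

in_degree = defaultdict(int)
for src, dst in flows:
    in_degree[dst] += 1

# Rank companies by inbound know-how flow (a crude centrality proxy).
ranking = sorted(in_degree, key=in_degree.get, reverse=True)
print(ranking[0])  # prints C: it attracts the most inbound flows
```

The paper's finding is that early centrality in this kind of network predicts long-term performance well enough to double the benchmark hit rate of venture funds.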
Mar19: New Technology Information: Indicators and Surprises
Why Is This Information Valuable?
“Rigorous Agent Evaluation: An Adversarial Approach To Uncover Catastrophic Failures”, by Uesato et al from DeepMind
This paper details how generative adversarial networks can be used to identify catastrophic failure modes in complex adaptive systems. As this technology is further developed, it has potentially very important applications in both the national security and economic sectors.
“The Neuro-Symbolic Concept Learner: Interpreting Scenes, Words, And Sentences From Natural Supervision”, by Mao et al from MIT, IBM, and DeepMind
The authors describe a potentially very powerful new approach to AI, which combines symbolic concept learning with a deep neural network. This leads to a substantial reduction in the amount of training data required, as well as a sharp improvement in the speed of concept learning, and thus, potentially, improvements in transfer learning (i.e., the application of concepts to novel situations).
“Tackling Europe’s Gap in Digital and AI” by the McKinsey Global Institute
This report’s discouraging conclusions about the state of digitization and AI adoption in Europe have important implications for the region’s future economic growth, national security spending, social conditions, and political conflicts.

“Digitisation is an important technical and organisational precondition for the spread of AI, yet Europe’s digital gap—at about 35 percent with the United States—has not narrowed in recent years. Early digital companies have been the first to develop strong positions in AI, yet only two European companies are in the worldwide digital top 30, and Europe is home to only 10 percent of the world’s digital unicorns...”

“Europe has about 25 percent of AI startups, but its early-stage investment in AI lags behind that of the United States and China. Further, with the exception of smart robotics, Europe is not ahead of the United States in AI diffusion, and less than half of European firms have adopted one AI technology, with a majority of those still in the pilot stage.”
“Informed Machine Learning – Towards a Taxonomy of Explicit Integration of Knowledge into Machine Learning”, by von Rueden et al

One of the constraints on the development and application of artificial intelligence technologies has been their need for large amounts of training data. We have previously noted the development of generative adversarial networks which generate artificial training data to speed the learning process. Another approach is the inclusion of domain knowledge to speed learning. This approach also potentially facilitates the development of abstractions by AI, which in turn facilitates “transfer learning”, or the application of conceptual abstractions to new situations, as is the case in human learning.

This paper provides a useful taxonomy that helps in understanding this knowledge-integration process, and thus in developing indicators of its progress.
“The Algorithmic Automation Problem: Prediction, Triage, and Human Effort”, by Raghu et al
As we have noted in past issues, the deployment of AI technologies is proceeding more slowly than some had expected. This paper describes how one of the underlying problems – determining whether an AI or a human being is best suited to perform a given task – can be more efficiently addressed. It thus provides another indicator that can be used to improve estimates of the speed of AI deployment, and thus their future impact on the economy, society, and politics.
“How Aligned is Career and Technical Education to Local Labor Markets?”, by the Fordham Institute
As we have noted before, both education and healthcare provision are two critical enabling “social technologies” that have profound “downstream” impacts on economic, national security, social and political issues.

This depressing report highlights the surprising extent to which CTE programs in the United States (usually known as Vocational and Technical Education in other nations) are not providing students with competences and credentials that are highly valued in private sector labor markets. If not corrected, this will contribute to weaker economic growth, more demand for social safety net spending, and probably higher levels of social and political conflict.
“Causal Effect Identification from Multiple Incomplete Data Sources: A General Search-based Approach”, by Tikka et al
This is a very technical paper, but highlights use of Judea Pearl’s do-calculus in automated search for causal relationships in a large data set. As such, it is a key indicator of AI progress in the critical area of causal (and counterfactual) modeling.
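The simplest case such a search can identify is backdoor adjustment, P(y | do(x)) = Σ_z P(y | x, z) P(z). A hand-worked example with an invented joint distribution over a confounder Z, treatment X, and outcome Y:

```python
# Backdoor adjustment on an invented joint distribution P(z, x, y):
# P(y | do(x)) = sum_z P(y | x, z) * P(z), valid when Z blocks all
# backdoor paths from X to Y.
P = {
    (0, 0, 0): 0.20, (0, 0, 1): 0.10, (0, 1, 0): 0.05, (0, 1, 1): 0.15,
    (1, 0, 0): 0.05, (1, 0, 1): 0.05, (1, 1, 0): 0.15, (1, 1, 1): 0.25,
}

def p_z(z):
    return sum(v for (zz, _, _), v in P.items() if zz == z)

def p_y_given_xz(y, x, z):
    return P[(z, x, y)] / (P[(z, x, 0)] + P[(z, x, 1)])

def p_y_do_x(y, x):
    return sum(p_y_given_xz(y, x, z) * p_z(z) for z in (0, 1))

# Compare the causal effect with the naive observational conditional:
p_obs = (P[(0, 1, 1)] + P[(1, 1, 1)]) / sum(v for (_, xx, _), v in P.items() if xx == 1)
print(round(p_y_do_x(1, 1), 4), round(p_obs, 4))  # prints 0.6875 0.6667
```

The interventional probability differs from the naive conditional because Z confounds the observed association; do-calculus search generalizes this kind of adjustment to arbitrary graphs and multiple, incomplete data sources.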
“How China tried and failed to win the AI race: The inside story”, by Alison Rayome in Tech Republic
The author claims that, “China fooled the world into believing it is winning the AI race, when really it is only just getting started.”

However, a close reading of the article leaves us with a less smug conclusion, based on the distinction between a snapshot of a situation and how fast it is evolving over time. A more accurate conclusion seems to be that while the US is currently ahead of China in key areas of AI (including the chips on which AI software runs), in some areas the pace of improvement in China seems to be faster than the pace in the US (e.g., how lack of privacy concerns is enabling the creation of very large training data sets).

In sum, the claim that “China has tried and failed to win the AI race” seems very premature.
“Once Hailed as Unhackable, Blockchains are Getting Hacked” by Mike Orcutt in MIT Technology Review
“A blockchain is a cryptographic database maintained by a network of computers, each of which stores a copy of the most up-to-date version. A blockchain protocol is a set of rules that dictate how the computers in the network, called nodes, should verify new transactions and add them to the database. The protocol employs cryptography, game theory, and economics to create incentives for the nodes to work toward securing the network instead of attacking it for personal gain. If set up correctly, this system can make it extremely difficult and expensive to add false transactions but relatively easy to verify valid ones.”
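The hash-linking described above is what makes retroactively inserting false transactions expensive. A minimal sketch of that mechanism only (omitting the consensus protocol, incentives, and networking entirely):

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's full contents, including its link to the previous block."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, transactions):
    prev = block_hash(chain[-1]) if chain else "0" * 64  # genesis marker
    chain.append({"prev": prev, "tx": transactions})

chain = []
add_block(chain, ["alice->bob:5"])
add_block(chain, ["bob->carol:2"])
assert chain[1]["prev"] == block_hash(chain[0])  # the chain verifies

# Tampering with an earlier block breaks every later link:
chain[0]["tx"] = ["alice->bob:500"]
print(chain[1]["prev"] == block_hash(chain[0]))  # prints False
```

Because each block commits to the hash of its predecessor, changing any historical transaction invalidates every later link, which honest nodes can detect cheaply; the article's point is that the vulnerabilities lie in everything around this core.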

“That’s what’s made the technology so appealing to many industries…But the more complex a blockchain system is, the more ways there are to make mistakes while setting it up.”

“We’ve long known that just as blockchains have unique security features, they have unique vulnerabilities. Marketing slogans and headlines that called the technology “unhackable” were dead wrong.

“That’s been understood, at least in theory, since Bitcoin emerged a decade ago. But in the past year, amidst a Cambrian explosion of new cryptocurrency projects, we’ve started to see what this means in practice—and what these inherent weaknesses could mean for the future of blockchains and digital assets.”
Feb19: New Technology Information: Indicators and Surprises
Why Is This Information Valuable?
“The Hanabi Challenge: A New Frontier for AI Research”, by Bard et al from DeepMind
“From the early days of computing, games have been important testbeds for studying how well machines can do sophisticated decision making. In recent years, machine learning has made dramatic advances with artificial agents reaching superhuman performance in challenge domains like Go, Atari, and some variants of poker. As with their predecessors of chess, checkers, and backgammon, these game domains have driven research by providing sophisticated yet well-defined challenges for artificial intelligence practitioners.

“We continue this tradition by proposing the game of Hanabi as a new challenge domain with novel problems that arise from its combination of purely cooperative gameplay and imperfect information in a two to five player setting. In particular, we argue that Hanabi elevates reasoning about the beliefs and intentions of other agents to the foreground. We believe developing novel techniques capable of imbuing artificial agents with such theory of mind will not only be crucial for their success in Hanabi, but also in broader collaborative efforts, and especially those with human partners.”
“Contest Models Highlight Inherent Inefficiencies Of Scientific Funding Competitions”, by Gross and Bergstrom
“Scientific research funding is allocated largely through a system of soliciting and ranking competitive grant proposals. In these competitions, the proposals themselves are not the deliverables that the funder seeks, but instead are used by the funder to screen for the most promising research ideas. Consequently, some of the funding program’s impact on science is squandered because applying researchers must spend time writing proposals instead of doing science. To what extent does the community’s aggregate investment in proposal preparation negate the scientific impact of the funding program? Are there alternative mechanisms for awarding funds that advance science more efficiently? We use the economic theory of contests to analyze how efficiently grant proposal competitions advance science, and compare them with recently proposed, partially randomized alternatives such as lotteries.

“We find that the effort researchers waste in writing proposals may be comparable to the total scientific value of the research that the funding supports, especially when only a few proposals can be funded. Moreover, when professional pressures motivate investigators to seek funding for reasons that extend beyond the value of the proposed science (e.g., promotion, prestige), the entire program can actually hamper scientific progress when the number of awards is small. We suggest that lost efficiency may be restored either by partial lotteries for funding or by funding researchers based on past scientific success instead of proposals for future work.”
“Bigger Teams Aren't Always Better in Science And Tech”, Science Daily, 13Feb19


“The Challenge of Overcoming the “End of Science”: How can we improve R&D processes when they are so poorly defined?”, by Dr. Jeffrey Funk
Both of these articles provide further information about root causes of the apparent slowdown in the productivity of R&D.

“In today's science and business worlds, it's increasingly common to hear that solving big problems requires a big team. But a new analysis of more than 65 million papers, patents and software projects found that smaller teams produce much more disruptive and innovative research…large teams more often develop and consolidate existing knowledge.”

Funk “discusses the falling productivity of R&D, the limitations of existing terms, concepts and theories of R&D, and the necessity of better defining R&D processes before the falling productivity of them can be reversed.” This is a very thought-provoking paper that is well worth a read.
Reverberations continue as EU regulators take aim at technology platform business models.

“The Google GDPR Fine: Some Thoughts”, by Michael Mandel

“The GDPR will mean big changes in the way that European and U.S. companies do business in Europe. As we noted at a recent privacy panel, rather than being a matter of speculation, its economic impact has become an empirical question. Will the tighter privacy protections of the GDPR slow growth and innovation, as skeptics claim, or will these provisions increase consumer trust and usher in a new era of European digital gains, as supporters say? We await the answers to these questions with great interest.

“However, the enforcement stage of the GDPR has not gotten off on the right foot. CNIL, the French National Data Protection Commission, just fined Google 50 million euros for what they called ‘lack of transparency, inadequate information and lack of valid consent regarding the ads’ personalization.’ The fines were based in part on complaints filed by privacy groups on May 25, 2018, the very day that the GDPR went into effect. Moreover, the complaints were filed in France, despite the fact that Google’s European headquarters are in Ireland.

“The location of the complaints is relevant because the most straightforward reading of the GDPR’s “one-stop-shop” principle suggests that the location of a company’s European headquarters is the main factor determining the company’s lead regulator for GDPR purposes. That’s not the only criterion, for sure, but it was only natural for the Irish Data Protection Commission to take the lead role in regulating Google.

“The fact that the privacy organizations filed their complaints with France, not Ireland, suggests that they were forum-shopping–looking for a country which would look favorably on the issues they raised. Moreover, France’s willingness to jump to the front of the regulator queue suggests that they were interested in setting a precedent, rather than letting the GDPR process unfold.

“Finally, an important part of the rationale behind the GDPR was to further move towards a digital single market, by allowing companies to only deal with a single privacy regulator. If other countries follow France’s lead and find reasons to levy data protection-related fines on multinationals that have their European headquarters elsewhere, then the GDPR will end up fragmenting markets, rather than making them more consistent. That’s a losing proposition for everyone.”

“German Regulators Just Outlawed Facebook's Whole Ad Business”, Wired 7Feb19

“Germany’s Federal Cartel Office, the country’s antitrust regulator, ruled that Facebook was exploiting consumers by requiring them to agree to this kind of data collection in order to have an account, and has prohibited the practice going forward. Facebook has one month to appeal.”
“Disinformation and Fake News: Final Report”, by the UK House of Commons
“We have always experienced propaganda and politically-aligned bias, which purports to be news, but this activity has taken on new forms and has been hugely magnified by information technology and the ubiquity of social media. In this environment, people are able to accept and give credence to information that reinforces their views, no matter how distorted or inaccurate, while dismissing content with which they do not agree as ‘fake news’. This has a polarising effect and reduces the common ground on which reasoned debate, based on objective facts, can take place. Much has been said about the coarsening of public debate, but when these factors are brought to bear directly in election campaigns then the very fabric of our democracy is threatened….

“The big tech companies must not be allowed to expand exponentially, without constraint or proper regulatory oversight. But only governments and the law are powerful enough to contain them. The legislative tools already exist. They must now be applied to digital activity, using tools such as privacy laws, data protection legislation, antitrust and competition law. If companies become monopolies they can be broken up, in whatever sector.”
“Priority Challenges for Social and Behavioral Research and Its Modeling”, by Davis et al from RAND


“Uncertainty Analysis to Better Confront Model Uncertainty”, by Davis and Popper from RAND
“Social-behavioral (SB) modeling is famously hard. Three reasons merit pondering:

First, Complex adaptive systems. Social systems are complex adaptive systems (CAS) that need to be modeled and analyzed accordingly—not with naïve efforts to achieve accurate and narrow predictions, but to achieve broader understanding, recognition of patterns and phases, limited forms of prediction, and results shown as a function of context and other assumptions.

Great advances are needed in understanding the states of complex adaptive systems and their phase spaces and in recognizing both instabilities and opportunities for influence.
Second, Wicked problems. Many social-behavioral issues arise in the form of so-called wicked problems—i.e., problems with no a priori solutions and with stakeholders that do not have stable objective functions. Solutions, if they are found at all, emerge from human interactions.

Third, Structural dynamics. The very nature of social systems is often structurally dynamic in that structure changes may emerge after interactions and events. This complicates modeling…

“The hard problems associated with CAS need not be impossible. It is not a pipe dream to imagine valuable SB modeling at individual, organizational, and societal scales. After all, complex adaptive systems are only chaotic in certain regions of their state spaces. Elsewhere a degree of prediction and influence is possible. We need to recognize when a social system is or is not controllable…

As for problem wickedness, it should often be possible to understand SB phenomena well enough to guide actions that increase the likelihood of good developments and reduce the likelihood of bad ones. Consider how experienced negotiators can facilitate eventual agreements between nations, or between companies and unions, even when emotions run high and no agreement exists initially about endpoints. Experience helps, and model-based analysis can help to anticipate possibilities and design strategies. Given modern science and technology, opportunities for breakthroughs exist, but they will not come easily….

To improve SB modeling, we need to understand obstacles, beginning with shortcomings of the science that should underlie it. Current SB theories are many, rich, and informative, but also narrow and fragmented. They do not provide the basis for systemic SB modeling. More nearly comprehensive and coherent theories are needed, but current disciplinary norms and incentives favor continued narrowness and fragmentation. No ultimate “grand theory” is plausible, but a good deal of unification is possible with various domains…

To represent social-behavioral theory requires increased emphasis on causal models (rather than statistical models) and on uncertainty-sensitive models that routinely display results parametrically in the multiple dimensions that define context. That is, we need models that help us with causal reasoning under uncertainty.”

In the other RAND study on uncertainty, Davis and Popper note that, "the traditional focus of uncertainty analysis has been on model parameter uncertainty and irreducible variability (randomness), but as systems and problems have become more complex, uncertainty about the underlying conceptual model and its specification in code have become much more important." Better addressing model uncertainty is likely key to reducing policymaker skepticism about the value of modeling results in decisionmaking. Their paper provides concrete recommendations for how to better address model-related uncertainty.
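The parametric, uncertainty-sensitive analysis the RAND authors call for can be illustrated with a minimal sketch. The model and parameter ranges below are invented for illustration; the point is the practice itself: instead of reporting one best-estimate run, sweep the model across a grid of contested assumptions and report the spread of outcomes.

```python
import itertools

# Toy policy model: projected technology adoption after 10 periods of
# logistic growth. Both the model form and the parameter ranges are
# hypothetical -- they stand in for the structural and contextual
# assumptions that real social-behavioral models must vary.
def adoption(rate, ceiling, start=0.05, periods=10):
    level = start
    for _ in range(periods):
        level += rate * level * (1 - level / ceiling)
    return level

rates = [0.2, 0.4, 0.6]      # contested growth-rate assumptions
ceilings = [0.5, 0.8, 1.0]   # contested saturation-level assumptions

# Run the model across the full grid of assumption combinations.
results = {(r, c): adoption(r, c)
           for r, c in itertools.product(rates, ceilings)}

worst, best = min(results.values()), max(results.values())
print(f"{len(results)} assumption sets; outcomes range "
      f"from {worst:.2f} to {best:.2f}")
```

Displaying the full outcome range, rather than a single point estimate, is the kind of routine parametric presentation of uncertainty the quoted passage recommends.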
“Causal Effect Identification from Multiple Incomplete Data Sources: A General Search-based Approach”, by Tikka et al
This is a very technical paper, but highlights use of Judea Pearl’s do-calculus in automated search for causal relationships in a large data set. As such, it is a key indicator of AI progress in the critical area of causal (and counterfactual) modeling.
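To give a flavor of the causal identification that do-calculus makes possible (though not of the paper's search algorithm itself), here is a minimal sketch of backdoor adjustment on synthetic data. All variable names and parameter values are invented for illustration: a confounder Z drives both a treatment X and an outcome Y, so the naive associational contrast is biased, while the adjustment formula recovers the true causal effect.

```python
import random

random.seed(0)

# Synthetic data: confounder Z affects both treatment X and outcome Y.
# The true causal effect of X on Y is +1.0; Z adds +2.0 to Y and also
# makes treatment more likely, so naive comparison is confounded.
data = []
for _ in range(20000):
    z = random.random() < 0.5
    x = random.random() < (0.8 if z else 0.2)
    y = 1.0 * x + 2.0 * z + random.gauss(0, 0.1)
    data.append((z, x, y))

def mean(vals):
    return sum(vals) / len(vals)

# Naive (associational) contrast: E[Y | X=1] - E[Y | X=0], biased upward.
naive = (mean([y for z, x, y in data if x])
         - mean([y for z, x, y in data if not x]))

# Backdoor adjustment: E[Y | do(X=x)] = sum_z E[Y | X=x, Z=z] * P(Z=z)
def do_x(xval):
    total = 0.0
    for zval in (False, True):
        stratum = [y for z, x, y in data if z == zval and x == xval]
        p_z = sum(1 for z, _, _ in data if z == zval) / len(data)
        total += mean(stratum) * p_z
    return total

adjusted = do_x(True) - do_x(False)
print(f"naive contrast:    {naive:.2f}")    # inflated by confounding (~2.2)
print(f"adjusted contrast: {adjusted:.2f}")  # close to the true effect, 1.0
```

The paper's contribution goes well beyond this: it automates the search for such identifying formulas across multiple incomplete data sources, where no single simple adjustment may suffice.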
“This Is Why AI Has Yet To Reshape Most Businesses” by Brian Bergstein in MIT Technology Review
See also, “AI Adoption Advances, But Foundational Barriers Remain” by McKinsey
“Despite what you might hear about AI sweeping the world, people in a wide range of industries say the technology is tricky to deploy. It can be costly. And the initial payoff is often modest. It’s one thing to see breakthroughs in artificial intelligence that can outplay grandmasters of Go, or even to have devices that turn on music at your command. It’s another thing to use AI to make more than incremental changes in businesses that aren’t inherently digital…

Gains have been largest at the biggest and richest companies, which can afford to spend heavily on the talent and technology infrastructure necessary to make AI work well…algorithms are a small part of what matters. Far more important are organizational elements that ripple from the IT department all the way to the front lines of a business…All this requires not just money but also patience, meticulousness, and other quintessentially human skills that too often are in short supply.”

The McKinsey article notes that faster digitization is a critical enabler of faster AI deployment.
“Companies Are Failing in Their Efforts to Become Data-Driven”, by Randy Bean and Thomas H. Davenport
“An eye-opening 77% of executives report that business adoption of Big Data/AI initiatives is a major challenge, up from 65% last year…

“72% of survey participants report that they have yet to forge a data culture

69% report that they have not created a data-driven organization

53% state that they are not yet treating data as a business asset

52% admit that they are not competing on data and analytics.”

“These sobering results and declines come in spite of increasing investment in big data and AI initiatives...Critical obstacles still must be overcome before companies begin to see meaningful benefits from their big data and AI investments...

“Executives who responded to the survey say that the challenges to successful business adoption do not appear to stem from technology obstacles; only 7.5% of these executives cite technology as the challenge. Rather, 93% of respondents identify people and process issues as the obstacle. Clearly, the difficulty of cultural change has been dramatically underestimated in these leading companies — 40.3% identify lack of organization alignment and 24% cite cultural resistance as the leading factors contributing to this lack of business adoption.”
OpenAI’s new Language Model is so powerful that its source code will not be released because of its potential for producing highly realistic fake news
From the OpenAI press release:

“Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on task-specific datasets. We demonstrate that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText.”

For the full paper, see, “Language Models are Unsupervised Multitask Learners” by Radford et al

Also, “Strategies for Structuring Story Generation” by Fan et al
World Discovery Models” by Azar et al from DeepMind
This paper is yet another indicator of the underappreciated speed at which various AI technologies and applications are developing.

“The underlying process of discovery in humans is complex and multifaceted. However, one can identify two main mechanisms for discovery. The first mechanism is active information seeking. One of the primary behaviours of humans is their attraction to novelty (new information) in their world. The human mind is very good at distinguishing between the novel and the known, and this ability is partially due to the extensive internal reward mechanisms of surprise, curiosity and excitement

The second mechanism is building a statistical world model. Within cognitive neuroscience, the theory of statistical predictive mind states that the brain, like scientists, constructs and maintains a set of hypotheses over its representation of the world. Upon perceiving a novelty, our brain has the ability to validate the existing hypothesis, reinforce the ones that are compatible with the new observation and discard the incompatible ones. This self-supervised process of hypothesis building is essentially how humans consolidate their ever-growing knowledge in the form of an accurate and global model…”

“The outstanding ability of the human mind for discovery has led to many breakthroughs in science, art and technology. Here we investigate the possibility of building an agent capable of discovering its world using the modern AI technology…

We introduce NDIGO, Neural Differential Information Gain Optimisation, a self-supervised discovery model that aims at seeking new information to construct a global view of its world from partial and noisy observations. Our experiments on some controlled 2-D navigation tasks show that NDIGO outperforms state-of-the-art information-seeking methods in terms of the quality of the learned representation. The improvement in performance is particularly significant in the presence of white or structured noise where other information-seeking methods follow the noise instead of discovering their world.”
“Psychlab: A Psychology Laboratory for Deep Reinforcement Learning Agents”, by Leibo et al from DeepMind
This is a key indicator of advances in developing an AI based “theory of mind” that will enable artificial agents to better understand, and anticipate, the actions of human agents with whom they either partner or compete.

“Psychlab is a simulated psychology laboratory inside the first-person 3D game world of DeepMind Lab. Psychlab enables implementations of classical laboratory psychological experiments so that they work with both human and artificial agents. Psychlab has a simple and flexible API that enables users to easily create their own tasks. As examples, we are releasing Psychlab implementations of several classical experimental paradigms including visual search, change detection, random dot motion discrimination, and multiple object tracking.

We also contribute a study of the visual psychophysics of a specific state-of-the-art deep reinforcement learning agent: UNREAL. This study leads to the surprising conclusion that UNREAL learns more quickly about larger target stimuli than it does about smaller stimuli. In turn, this insight motivates a specific improvement in the form of a simple model of foveal vision that turns out to significantly boost UNREAL’s performance, both on Psychlab tasks, and on standard DeepMind Lab tasks. By open-sourcing Psychlab we hope to facilitate a range of future such studies that simultaneously advance deep reinforcement learning and improve its links with cognitive science.”
“The Productivity Imperative For Healthcare Delivery In The United States”, by McKinsey
This new report examines how productivity in the US healthcare delivery industry evolved between 2001 and 2016.

“There is little doubt that the trajectory of healthcare spending in the United States is worrisome and perhaps unsustainable. Underlying this spending is the complex system used to deliver healthcare services to patients. Given that the US currently expends 18% of its gross domestic product (GDP) on healthcare, this system might be expected to deliver high-quality, affordable, and convenient patient care—yet it often fails to achieve that goal…

“One explanation, however, has largely been overlooked: poor productivity in the healthcare delivery industry. In practical terms, increased productivity in healthcare delivery would make it possible to continue driving medical advances and meet the growing demand for services while improving affordability (and likely maintaining current employment and wages)…

Job creation—not labor productivity gains—was responsible for most of the growth in the US healthcare delivery industry from 2001 to 2016. Innovation, changes in business practices, and the other variables that typically constitute multifactor productivity growth harmed the industry’s growth. If the goal is to control healthcare spending growth, both trends must change…

“The impact of improving productivity would be profound. Our conservative estimates suggest that if the healthcare delivery industry could rely more heavily on labor productivity gains rather than workforce expansion to meet demand growth, by 2028 healthcare spending could potentially be (on a nominal basis) about $280 billion to $550 billion less than current national health expenditures (NHE) projections suggest...

Cumulatively, $1.2 trillion to $2.3 trillion could be saved over the next decade if healthcare delivery were to move to a productivity-driven growth model. Savings of this magnitude would bring the rise in healthcare spending in line with—and possibly below—GDP growth. In addition, the increased labor productivity in healthcare delivery would boost overall US economic growth at a faster rate than current projections—an incremental 20 to 40 basis points (bps) per annum—both through direct economic growth and the spillover impact of greater consumption in other industries. However, meaningful action by, and collaboration among, all stakeholders will be needed to deliver this value.”
Jan19: New Technology Information: Indicators and Surprises
Why Is This Information Valuable?
DeepMind’s AlphaStar software has for the first time defeated a top ranked professional player five games to zero in the real time strategy game Starcraft II.
As DeepMind notes, “until now, AI techniques have struggled to cope with the complexity of StarCraft…

The need to balance short- and long-term goals and adapt to unexpected situations poses a huge challenge…Mastering this problem required breakthroughs in several AI research challenges including:

Game theory: StarCraft is a game where, just like rock-paper-scissors, there is no single best strategy. As such, an AI training process needs to continually explore and expand the frontiers of strategic knowledge.

Imperfect Information: Unlike games like chess or Go where players see everything, crucial information is hidden from a StarCraft player and must be actively discovered by “scouting”.

Long-term planning: Like many real-world problems, cause-and-effect is not instantaneous. Games can also take anywhere up to one hour to complete, meaning actions taken early in the game may not pay off for a long time.

Real time: Unlike traditional board games where players alternate turns between subsequent moves, StarCraft players must perform actions continually as the game clock progresses.

Large action space: Hundreds of different units and buildings must be controlled at once, in real-time, resulting in a huge combinatorial space of possibilities…

Due to these immense challenges, StarCraft has emerged as a “grand challenge” for AI research.”

DeepMind’s latest achievement is further evidence of the accelerating pace at which AI capabilities are improving. To be sure, Starcraft is still a discrete system, governed by an unchanging set of rules. In that sense, it critically differs from real world complex socio-technical systems, in which agents’ adaptive actions are not constrained by unchanging rules, and system dynamics evolve over time.

In real world complex adaptive systems, making sense of new information in an evolving context, inducing abstract concepts from novel situations, and then using them to rapidly reason about the dynamics of a situation and the likely impact of possible actions remain, for now, beyond the capabilities of the most advanced artificial intelligence systems.

In addition, some critics have noted that AlphaStar’s victory owed more to the speed of its play relative to its very talented human opponent, and the physical accuracy of its moves (placement of units on a map), than it did to superior strategy (e.g., see “An Analysis On How Deepmind’s Starcraft 2 AI’s Superhuman Speed is Probably a Band-Aid Fix For The Limitations of Imitation Learning”, by Aleksi Pietikainen).

But all that said, AlphaStar’s success still reminds us that the gap between the capabilities of AI and human beings, even in cognitively challenging areas, is closing faster than many people appreciate.
“Quantum Terrorism: Collective Vulnerability of Global Quantum Systems”, by Johnson et al.
Quantum computing, while exponentially more powerful than today’s technology, will also bring new vulnerabilities. The authors “show that an entirely new form of threat arises by which a group of 3 or more quantum-enabled adversaries can maximally disrupt the global quantum state of future systems in a way that is practically impossible to detect, and that is amplified by the way that humans naturally group into adversarial entities.”
Shoshana Zuboff’s new book, “The Age of Surveillance Capitalism: The Fight for the Future at the New Frontier of Power” crystallizes the often unspoken worries that many people have felt about exponentially improving artificial intelligence technologies.
Writing in the Financial Times Zuboff describes “a new economic logic [she] calls ‘surveillance capitalism’”. It “was invented in the teeth of the bust, when a fledgling company called Google decided to try and boost ad revenue by using its exclusive access to largely ignored data logs — the “digital exhaust” left over from users’ online search and browsing. The data would be analysed for predictive patterns that could match ads and users. Google would both repurpose the “surplus” behavioural data and develop methods to aggressively seek new sources of it…These operations were designed to bypass user awareness and, therefore, eliminate any possible “friction”. In other words, from the very start Google’s breakthrough depended upon a one-way mirror: surveillance…”

“Surveillance capitalism soon migrated to Facebook and rose to become the default model for capital accumulation in Silicon Valley, embraced by every start-up and app. It was rationalised as a quid pro quo for free services but is no more limited to that context than mass production was limited to the fabrication of the Model T. It is now present across a wide range of sectors, including insurance, retail, healthcare, finance, entertainment, education and more. Capitalism is literally shifting under our gaze.”

“It has long been understood that capitalism evolves by claiming things that exist outside of the market dynamic and turning them into market commodities for sale and purchase. Surveillance capitalism extends this pattern by declaring private human experience as free raw material that can be computed and fashioned into behavioural predictions for production and exchange…”

“Surveillance capitalists produce deeply anti-democratic asymmetries of knowledge and the power that accrues to knowledge. They know everything about us, while their operations are designed to be unknowable to us. They predict our futures and configure our behaviour, but for the sake of others’ goals and financial gain. This power to know and modify human behaviour is unprecedented.

“Often confused with totalitarianism and feared as Big Brother, it is a new species of modern power that I call ‘instrumentarianism’. [This] power can know and modify the behaviour of individuals, groups and populations in the service of surveillance capital. The Cambridge Analytica scandal revealed how, with the right knowhow, these methods of instrumentarian power can pivot to political objectives. But make no mistake, every tactic employed by Cambridge Analytica was part of surveillance capitalism’s routine operations of behavioural influence.”

As the Guardian noted in its review of her book, Zuboff “points out that while most of us think that we are dealing merely with algorithmic inscrutability, in fact what confronts us is the latest phase in capitalism’s long evolution – from the making of products, to mass production, to managerial capitalism, to services, to financial capitalism, and now to the exploitation of behavioural predictions covertly derived from the surveillance of users.”

“The combination of state surveillance and its capitalist counterpart means that digital technology is separating the citizens in all societies into two groups: the watchers (invisible, unknown and unaccountable) and the watched. This has profound consequences for democracy because asymmetry of knowledge translates into asymmetries of power. But whereas most democratic societies have at least some degree of oversight of state surveillance, we currently have almost no regulatory oversight of its privatised counterpart. This is intolerable.”
In light of Zuboff’s book, the provocatively titled article (“The French Fine Against Google is the Start of a War”) in the 24Jan19 Economist does not seem excessive.
“On January 21st France’s data-protection regulator, which is known by its French acronym, CNIL, announced that it had found Google’s data-collection practices to be in breach of the European Union’s new privacy law, the General Data Protection Regulation (GDPR). CNIL hit Google with a €50m ($57m) fine, the biggest yet levied under GDPR. Google’s fault, said the regulator, had been its failure to be clear and transparent when gathering data from users…”

“The fine represents the first volley fired by European regulators at the heart of the business model on which Google and many other online services are based, one which revolves around the frictionless collection of personal data about customers to create personalised advertising. It is the first time that the data practices behind Google’s advertising business, and thus those of a whole industry, have been deemed illegal. Google says it will appeal against the ruling. Its argument will not be over whether consent is required to collect personal data—it agrees that it is—but what quality of consent counts as sufficient…Up to now the rules that underpin the digital economy have been written by Google, Facebook et al. But with this week’s fine that is starting to change.”

The growing public anger in the West over reduced privacy that both Zuboff’s book and the CNIL fine represent has important implications for the race to create ever more powerful machine learning/artificial intelligence capabilities, whose advancement is critically dependent on access to large amounts of training data. In China, data privacy is not an issue. In Europe, it is a very serious issue today. The US currently lies somewhere in between.

While emerging technologies like Generative Adversarial Networks may in future be used to quickly generate high quality simulated data that can be used to train AI, we aren’t there yet. Until we are, the data privacy issue will be inextricably linked to the pace of AI development, which in turn has national security, as well as economic and social implications.
“We analyzed 16,625 papers to figure out where AI is headed next” by Karen Hao, in MIT Technology Review, 25Jan19
“The sudden rise and fall of different techniques has characterized AI research for a long time, he says. Every decade has seen a heated competition between different ideas. Then, once in a while, a switch flips, and everyone in the community converges on a specific one. At MIT Technology Review, we wanted to visualize these fits and starts. So we turned to one of the largest open-source databases of scientific papers, known as the Arxiv (pronounced “archive”). We downloaded the abstracts of all 16,625 papers available in the “artificial intelligence” section through November 18, 2018, and tracked the words mentioned through the years to see how the field has evolved…”
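The trend analysis Hao describes amounts to counting term mentions in abstracts by year. A minimal sketch of that method follows; the handful of (year, abstract) pairs is invented for illustration and stands in for the 16,625 downloaded arXiv records, but the counting logic is the same at scale.

```python
from collections import defaultdict

# Hypothetical abstracts standing in for the downloaded arXiv records.
abstracts = [
    (2008, "A knowledge-based expert system for automated planning"),
    (2012, "Deep neural networks for large-scale image classification"),
    (2013, "Convolutional neural networks with supervised learning"),
    (2016, "Deep reinforcement learning for game playing"),
    (2018, "Sample-efficient reinforcement learning with neural networks"),
]

# Terms whose rise and fall we want to track across years.
terms = ["knowledge", "neural", "reinforcement"]

# counts[term][year] = number of that year's abstracts mentioning the term
counts = defaultdict(lambda: defaultdict(int))
for year, text in abstracts:
    lowered = text.lower()
    for term in terms:
        if term in lowered:
            counts[term][year] += 1

for term in terms:
    print(term, sorted(counts[term].items()))
```

Even this toy sample reproduces the article's qualitative pattern: "knowledge" mentions concentrated early, "neural" rising in the 2010s, and "reinforcement" appearing in the most recent years.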

“We found three major trends: a shift toward machine learning during the late 1990s and early 2000s, a rise in the popularity of neural networks beginning in the early 2010s, and growth in reinforcement learning in the past few years…
The biggest shift we found was a transition away from knowledge-based systems by the early 2000s. These computer programs are based on the idea that you can use rules to encode all human knowledge. In their place, researchers turned to machine learning—the parent category of algorithms that includes deep learning…Instead of requiring people to manually encode hundreds of thousands of rules, this approach programs machines to extract those rules automatically from a pile of data. Just like that, the field abandoned knowledge-based systems and turned to refining machine learning…”

“In the few years since the rise of deep learning, our analysis reveals, a third and final shift has taken place in AI research. As well as the different techniques in machine learning, there are three different types: supervised, unsupervised, and reinforcement learning. Supervised learning, which involves feeding a machine labeled data, is the most commonly used and also has the most practical applications by far. In the last few years, however, reinforcement learning, which mimics the process of training animals through punishments and rewards, has seen a rapid uptick of mentions in paper abstracts… [The pivotal] moment came in October 2015, when DeepMind’s AlphaGo, trained with reinforcement learning, defeated the world champion in the ancient game of Go. The effect on the research community was immediate…

Our analysis provides only the most recent snapshot of the competition among ideas that characterizes AI research. But it illustrates the fickleness of the quest to duplicate intelligence… Every decade, in other words, has essentially seen the reign of a different technique: neural networks in the late ’50s and ’60s, various symbolic approaches in the ’70s, knowledge-based systems in the ’80s, Bayesian networks in the ’90s, support vector machines in the ’00s, and neural networks again in the ’10s. The 2020s should be no different, meaning the era of deep learning may soon come to an end.”
“It’s Still the Prices, Stupid”, by Anderson et al. in Health Affairs
As we have noted, healthcare and education are critical “social technologies”, particularly in a period of rapid change and heightened uncertainty about employment (which, for many Americans, is the source of their health insurance). Improving the effectiveness, efficiency, and adaptability of both these technologies will have a critical impact on the economy, society, and politics in the future.

The authors of this article update a famous 2003 article titled “It’s the Prices, Stupid”, which “found that the sizable differences in health spending between the US and other countries were explained mainly by health care prices.”

The authors of the present article find that, “The conclusion that prices are the primary reason why the US spends more on health care than any other country remains valid, despite health policy reforms and health systems restructuring that have occurred in the US and other industrialized countries since the 2003 article’s publication. On key measures of health care resources per capita (hospital beds, physicians, and nurses), the US still provides significantly fewer resources compared to the OECD median country. Since the US is not consuming greater resources than other countries, the most logical factor is the higher prices paid in the US.”
On the education front, Colorado recently updated its “Talent Pipeline” Report, which provides a stark reminder of how poorly the US education system is performing, even in a state with the nation’s second highest percentage of residents with a bachelor’s degree or higher (about 40%).
Out of 100 students who complete 9th grade, 70 graduate from high school on time, 43 enroll in college that autumn, 32 return after their first year of college, and just 25 graduate from college within six years of starting it.
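The pipeline figures above can be restated as stage-to-stage conversion rates, which make clear where the largest losses occur (the stage names follow the report summary; the conversion-rate framing is our own):

```python
# Colorado "Talent Pipeline" figures quoted above, per 100 ninth graders.
cohort = [
    ("complete 9th grade", 100),
    ("graduate high school on time", 70),
    ("enroll in college that autumn", 43),
    ("return after the first year of college", 32),
    ("graduate college within six years", 25),
]

# Conversion rate at each stage: survivors divided by the previous stage.
for (prev_stage, prev_n), (stage, n) in zip(cohort, cohort[1:]):
    print(f"{stage}: {n}/100 overall ({n / prev_n:.0%} of those who {prev_stage})")
```

The weakest link by this measure is the transition from high school graduation to college enrollment (43/70, about 61%).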

Results like these have two critical implications. First, they point to stagnant or declining human capital, which is a key driver of total factor productivity and thus long-term growth, particularly as the economy becomes more knowledge intensive. Second, at a time when the capabilities of labor-substituting technologies (like AI and automation) have been improving exponentially, the failure of human capital to keep pace will naturally induce businesses to invest more in the former and less in the latter. That would likely produce worsening economic inequality, rising unemployment, and skyrocketing government spending on social safety net programs, which will have to be paid for either with higher taxes or unlikely cuts in other spending.
Dec18: New Technology Information: Indicators and Surprises
Why Is This Information Valuable?
“Profiling for IQ Opens Up New Uber-Parenting Possibilities”, Financial Times, 22Nov18

“A US start-up, Genomic Prediction, claims it can genetically profile embryos to predict IQ, as well as height and disease risk. Since fertility treatment often produces multiple viable embryos, only one or two of which can be implanted, prospective parents could pick those with the “best” genes.” The FT notes the implications of this technology development: “We are sliding into Gattaca territory, in which successive generations are selected not only for health but also for beauty, intellect, stature and other aptitudes. Parents may even think it their moral duty to choose the “best” possible baby, not just for themselves but to serve the national interest. Carl Shulman and Nick Bostrom, from the Future of Humanity Institute at Oxford university, predicted in 2013 that some nations may pursue the idea of a perfectible population to gain economic advantage. China, incidentally, has long been reading the genomes of its cleverest students.” If this technology is scaled up, the economic, national security, social, and political implications will be both profound and highly disruptive.
“Natural Language Understanding Poised to Transform How We Work”, Financial Times, 3Dec18
The FT notes, “If language understanding can be automated in a wide range of contexts, it is likely to have a profound effect on many professional jobs. Communication using written words plays a central part in many people’s working lives. But it will become a less exclusively human task if machines learn how to extract meaning from text and churn out reports.” Up to now, an obstacle “in training neural networks to do the type of work analysts face — distilling information from several sources — is the scarcity of appropriate data to train the systems. It would require public data sets that include both source documents and a final synthesis, giving a complete picture that the system could learn from. [However], despite challenges such as these, recent leaps in natural language understanding (NLU) have made these systems more effective and brought the technology to a point where it is starting to find its way into many more business applications.”

The article focuses on technology from Primer, a new AI start-up that has developed more capable natural language understanding technology that is now in use by intelligence agencies.
“Why Companies that Wait to Adopt AI May Never Catch Up”, Harvard Business Review, by Mahidhar and Davenport, 3Dec18

If the authors’ hypothesis is correct, then AI will lead to the intensification of “winner take all” markets, with more companies and business models struggling to earn economic profits. Research has shown that this will also lead to worsening income inequality between employees at the winning companies and everyone else.
“Your Smartphone’s AI Algorithms Could Tell if You are Depressed”, MIT Technology Review, 3Dec18
Reports on a Stanford study (“Measuring Depression Symptom Severity from Spoken Language and 3D Facial Expressions” by Haque et al.) in which “a combination of facial expressions, voice tone, and use of specific words was used to diagnose depression”, with 80% accuracy. This could well turn out to be a double-edged sword, with clear benefits for early diagnosis and treatment of mental illness, but equally important concerns about privacy (e.g., its use by employers or insurance companies).
The Artificial Intelligence Index 2018 Annual Report
This report provides a range of excellent benchmarks for measuring the rate of improvement for various AI technologies, and their adoption across industries. Key findings include substantial shortening of training times (e.g., for visual recognition tasks), and the increasing rate of improvement for natural language understanding based on the GLUE benchmark.
“Learning from the Experts: From Expert Systems to Machine Learned Diagnosis Models” by Ravuri et al.
This paper describes how a model that embodies the knowledge of domain experts was used to generate artificial (synthetic) data about a system that was then used to train a deep learning network. This is an interesting approach that bears monitoring, particularly its potential future application to agent based modeling of complex adaptive systems.
“How Artificial Intelligence will Reshape the Global Order: The Coming Competition Between Digital Authoritarianism and Liberal Democracy”, by Nicholas Wright

A thought-provoking forecast of how developing social control technologies could affect domestic politics and strengthen authoritarian governments.
“Data Breaches Could Cause Users to Opt Out of Sharing Personal Data. Then What?” by Douglas Yeung from RAND
“If the public broadly opts out of using tech tools…insufficient or unreliable user data could destabilize the data aggregation business model that powers much of the tech industry. Developers of technologies such as artificial intelligence, as well as businesses built on big data, could no longer count on ever-expanding streams of data. Without this data, machine learning models would be less accurate.”
Arguably, with its General Data Protection Regulation (GDPR), the European Union has already moved in this direction. While the author focuses on commercial issues, there are also national security implications if China – where data privacy is not recognized as a legitimate concern – is able to develop superior AI applications because of access to a richer set of training data.
“Parents 2018: Going Beyond Good Grades”, a report by Learning Heroes and Edge Research

Improving education, and more broadly the quality of a nation’s human capital, is critical to improving employment, productivity and economic growth and reducing income inequality. But no system, team, or individual can improve (except by random luck) in the absence of accurate feedback. And this new report makes painfully clear that this is too often missing in America’s K-12 education system.

The report begins with the observation that, “parents have high aspirations for their children. Eight in 10 parents think it’s important for their child to earn a college degree, with African-American and Hispanic parents more likely to think it’s absolutely essential or very important. Yet if students are not meeting grade-level expectations, parents’ aspirations and students’ goals for themselves are unlikely to be realized. Today, nearly 40% of college students take at least one remedial course; those who do are much more likely to drop out, dashing both their and their parents’ hopes for the future…

Over three years, one alarming finding has remained constant: Nearly 9 in 10 parents, regardless of race, income, geography, and education levels, believe their child is achieving at or above grade level. Yet national data indicates only about one-third of students actually perform at that level. In 8th grade mathematics, while 44% of white students scored at the proficient level on the National Assessment of Educational Progress in 2017, only 20% of Hispanic and 13% of African-American students did so. This year, we delved into the drivers of this “disconnect.” We wanted to understand why parents with children in grades 3-8 hold such a rosy picture of their children’s performance and what could be done to move them toward a more complete and accurate view…

Report Cards Sit at the Center of the Disconnect: Parents rely heavily on report card grades as their primary source of information and assume good grades mean their child is performing at grade level. Yet two-thirds of teachers say report cards also reflect effort, progress, and participation in class, not just mastery of grade-level content… More than 6 in 10 parents report that their child receives mostly A’s and B’s on their report card, with 84% of parents assuming this indicates their child is doing the work expected of them at their current grade… Yet a recent study by TNTP found that while nearly two-thirds of students across five school systems earned A’s and B’s, far fewer met grade-level expectations on state tests. On the whole, students who were earning B’s in math and English language arts had less than a 35% chance of having met the grade-level bar on state exams.”
Nov18: New Technology Information: Indicators and Surprises
Why Is This Information Valuable?
“Deep Learning can Replicate Adaptive Traders in a Limit-Order Book Financial Market”, by Calvez and Cliff
“Successful human traders, and advanced automated algorithmic trading systems, learn from experience and adapt over time as market conditions change…We report successful results from using deep learning neural networks (DLNNs) to learn, purely by observation, the behavior of profitable traders in an electronic market… We also demonstrate that DLNNs can learn to perform better (i.e., more profitably) than the trader that provided the training data. We believe that this is the first ever demonstration that DLNNs can successfully replicate a human-like, or super-human, adaptive trader.”

This is a significant development. Along with similar advances in reinforcement learning (e.g., by DeepMind with AlphaZero), one can easily envision a situation where – at least over short time frames – most humans completely lose their edge over algorithms.

The good news (for humans, at least) is that over longer time frames the structure of the system evolves (and becomes less discrete), and performance becomes more dependent on higher forms of reasoning – causal and counterfactual – where humans are still far ahead of algorithms (and whose sensemaking, situation awareness, and decision making The Index Investor is intended to support).
“Social media cluster dynamics create resilient global hate highways”, by Johnson et al.
“Online social media allows individuals to cluster around common interests -- including hate. We show that tight-knit social clusters interlink to form resilient ‘global hate highways’ that bridge independent social network platforms, countries, languages and ideologies, and can quickly self-repair and rewire. We provide a mathematical theory that reveals a hidden resilience in the global axis of hate; explains a likely ineffectiveness of current control methods; and offers improvements…”
“The Semiconductor Industry and the Power of Globalization”, The Economist, 1Dec18
“If data are the new oil...chips are what turn them into something useful.” This special report provides a good overview of how the critical and highly globalized semiconductor supply chain is coming under increased pressure as competition between China and the United States intensifies.
“There’s a Reason Why Teachers Don’t Use the Software Provided by Their Districts”, by Thomas Arnett

We have noted in the past that education (like healthcare) is a critical social technology where substantial performance improvement is critical to increasing future rates of national productivity growth and reducing inequality. This study is not encouraging with respect to the impact technology has been having on the education sector.

The author finds that “a median of 70% of districts’ software licenses never get used, and a median of 97.6% of licenses are never used intensively.”
Reports emerged from China that CRISPR gene-editing technology had been used to modify a human embryo’s DNA before implanting it in a woman’s womb via IVF. The initial focus was reportedly on producing children who are resistant to HIV, smallpox, and cholera.

While this has been recognized as a possibility, there was also a belief that it would not happen so quickly, or with so little control. It was also significant that one target of the DNA modification was resistance to smallpox, a disease believed to have been eradicated and whose causative agent is now retained only by governments (which makes it a potentially very powerful biowar weapon).
“Virtual Social Science” by Stefan Thurner

Thurner is one of the world’s leading complex adaptive systems researchers, and anything he writes is usually rich with unique insights.

His latest paper is no exception. He reviews findings from the analysis of 14 years of extremely rich data from Pardus, a massive multiplayer online game (MMOG) involving about 430,000 players in which economic, social, and other decisions are made by humans, not algorithms.

This data can be used to develop and test a wide range of social science theories about the behavior of complex adaptive systems at various levels of aggregation, from the individual to the group to the system. It can also be used to evaluate agent-based and AI-driven approaches to predicting the future behavior of complex systems.

The author shows how many of the findings from analyzing game data line up with experimental findings based on the behavior of far fewer subjects. This points the way towards a new and potentially much more powerful approach to social science.

However, Thurner also notes the current limits on the extent to which human societies can be understood, and their behavior predicted, using this methodology: the inherent “co-evolutionary complexity” of complex adaptive social systems, whose interactions cause structures to change over time, often in a non-linear manner.
Oct18: New Technology Information: Indicators and Surprises
Why Is This Information Valuable?

“Using Machine Learning to Replicate Chaotic Attractors”, by Pathak et al.

Advances in a machine learning area known as “reservoir computing” have led to the creation of a model that reproduced the dynamics of a complex dynamical system. If this initial work can be extended, it will represent a significant advance. That said, this is not the same thing as AI learning to reproduce and predict the dynamic behavior of a complex adaptive system, such as financial markets and economies.
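The core idea behind reservoir computing can be illustrated with a minimal echo-state network: a fixed random recurrent “reservoir” is driven by the input signal, and only a linear readout is trained. The sketch below is our own illustration, with arbitrary hyperparameters and a simple sine-wave target rather than the chaotic systems Pathak et al. actually studied:

```python
import numpy as np

rng = np.random.default_rng(42)

# Fixed random reservoir: input weights and a recurrent matrix rescaled
# so its spectral radius is below 1 (the standard echo-state condition).
n_res = 200
W_in = rng.uniform(-0.5, 0.5, n_res)
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

# Drive the reservoir with the signal and record its states.
u = np.sin(np.linspace(0, 20 * np.pi, 2000))
states = np.zeros((len(u), n_res))
x = np.zeros(n_res)
for t, ut in enumerate(u):
    x = np.tanh(W @ x + W_in * ut)
    states[t] = x

# Train only the linear readout (ridge regression) to predict u[t+1]
# from the reservoir state at t, discarding an initial washout transient.
X, y = states[100:-1], u[101:]
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)

pred = X @ W_out
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(f"one-step-ahead RMSE: {rmse:.4f}")
```

Because the recurrent weights are never trained, fitting reduces to a single linear solve, which is why reservoir methods are cheap enough to run on long trajectories of dynamical systems.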
“The Impact of Bots on Opinions in Social Networks” by Hjouji et al.
Using both a model and data from the 2016 US presidential election, the authors conclude that “a small number of highly active bots in a social network can have a disproportionate impact on opinion…due to the fact that bots post one hundred times more frequently than humans.” In theory, this should make it easier for platforms like Twitter and Facebook to identify and close down these bots. The authors also surprisingly found that in 2016 pro-Clinton bots produced opinion shifts that were almost twice as large as the pro-Trump bots, despite the latter being larger in number.
“Learning-Adjusted Years of Schooling” by Filmer et al. from the World Bank
This valuable new indicator metric combines both the time spent in school and how much is learned during that time. The authors find that LAYS is strongly correlated with GDP growth. They also find wide gaps between countries, with some education systems being much more productive (in terms of learning per unit of time) than others. The good news is that this points to a substantial source of future gains for these economies in total factor productivity, provided their education systems can be improved.
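The adjustment can be sketched in one line: discount years in school by the ratio of a country’s test score to a high-performance benchmark. The 625-point benchmark mirrors the harmonized test-score ceiling used in the World Bank’s related Human Capital Project; the country inputs below are purely illustrative.

```python
def lays(years_of_schooling: float, test_score: float,
         benchmark: float = 625.0) -> float:
    """Learning-Adjusted Years of Schooling: years in school, discounted
    by learning per year relative to a benchmark system's test score."""
    return years_of_schooling * (test_score / benchmark)

# Two hypothetical countries with identical seat time but different learning:
print(lays(11.0, 600.0))  # high-learning system
print(lays(11.0, 375.0))  # low-learning system: same years, far fewer LAYS
```

The gap between the two hypothetical systems (over four learning-adjusted years, despite identical enrollment) is the kind of cross-country difference the authors report.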
“The Condition of College and Career Readiness, 2018” by ACT Inc.
More disappointing results based on a well-known indicator of US K-12 education system performance.

About three-fourths (76%) of 2018 ACT-tested graduates said they aspire to postsecondary education, most of them to a four-year degree or higher. Yet only 27% met all four college and career readiness benchmarks, and 35% met none. Readiness levels in math have steadily declined since 2014. Sample size = 1.9 million. “Just 26% of ACT-tested 2018 graduates likely have the foundational work readiness skills needed for more than nine out of 10 jobs recently profiled in the ACT JobPro® database.” This has significant (and negative) implications for future productivity and wage growth.
Sep18: New Technology Information: Indicators and Surprises
Why Is This Information Valuable?
In a new book, “AI Superpowers”, Chinese venture capitalist Kai-Fu Lee makes an important point: There is a critical difference between AI innovation and AI implementation. Success in the latter depends on the ability to collect and analyze large amounts of data – and this is an area where China is outpacing the rest of the world, because of its size, its state capitalism model, low level of concern with privacy, and its data intensive approach to domestic security.
Provides a logical argument for how and why China could gain a significant advantage in key artificial intelligence technologies.
US House of Representatives Subcommittee on Information Technology published a new report titled “Rise of the Machines.” Highlights: “First, AI is an immature technology; its abilities in many areas are still relatively new. Second, the workforce is affected by AI; whether that effect is positive, negative, or neutral remains to be seen. Third, AI requires massive amounts of data, which may invade privacy or perpetuate bias, even when using data for good purposes. Finally, AI has the potential to disrupt every sector of society in both anticipated and unanticipated ways.”
The report’s conclusions are an interesting contrast to Kai-Fu Lee’s. Similar to critiques by Gary Marcus and Judea Pearl, it highlights the limitations of current AI technologies, which suggests we are further away from a critical threshold than many media reports would suggest. That said, it also agrees that once a critical threshold of AI capability is reached, it will have strong disruptive effects.

However, the report agrees with Lee that privacy concerns are a potentially important constraint on AI progress.
“China is Overtaking the US in Scientific Research” by Peter Orszag in Bloomberg Opinion. Not just the quantity, but also “the quality of Chinese research is improving, though it currently remains below that of U.S. academics. A recent analysis suggests that, measured not just by numbers of papers but also by citations from other academics, Chinese scholars could become the global leaders in the near future.”
Suggests that the pace of technological improvement in China will accelerate.
“Quantum Hegemony: China’s Ambitions and the Challenge to US Innovation Leadership”, Center for a New American Security. “China’s advances in quantum science could impact the future military and strategic balance, perhaps even leapfrogging traditional U.S. military-technological advantages. Although it is difficult to predict the trajectories and timeframes for their realization, these dual-use quantum technologies could “offset” key pillars of U.S. military power, potentially undermining critical technological advantages associated with today’s information-centric ways of war, epitomized by the U.S. model.”
Highlights a key area in which faster Chinese technological progress and breakthroughs could confer substantial military advantage.
“A Storm in an IoT Cup: The Emergence of Cyber-Physical Social Machines” by Madaan et al. “The concept of ‘social machines’ is increasingly being used to characterize various socio-cognitive spaces on the Web. Social machines are human collectives using networked digital technology, which initiate real-world processes and activities including human communication, interactions and knowledge creation. As such, they continuously emerge and fade on the Web. The relationship between humans and machines is made more complex by the adoption of Internet of Things (IoT) sensors and devices. The scale, automation, continuous sensing, and actuation capabilities of these devices add an extra dimension to the relationship between humans and machines making it difficult to understand their evolution at either the systemic or the conceptual level. This article describes these new socio-technical systems, which we term Cyber-Physical Social Machines.”
Increasing complexity creates exponentially more hidden critical thresholds, and ways for a system to generate non-linear effects.
“Notes From the Frontier: Modeling the Impact of AI on the World Economy”, McKinsey Global Institute. Adoption of AI could increase annual global GDP growth by 1.2%. Adoption of AI technologies and emergence of their impact is following the typical “S-Curve” pattern. At this point, “the absence of evidence is not evidence of absence” of its potential impact.
Excellent analysis of the current state of AI development, rate of adoption, and range of observed effects.
Critical Point: “Because economic gains combine and compound over time…a key challenge is that adoption of AI could widen gaps between countries, companies, and workers.”
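The compounding point is easy to make concrete. Sustaining the quoted 1.2% of additional annual growth for two decades raises output by roughly 27% relative to the no-AI baseline (the arithmetic below is ours, not a figure from the McKinsey report):

```python
# Cumulative effect of an extra 1.2% of annual GDP growth, sustained,
# relative to a baseline without it. Illustrative arithmetic only.
extra_growth = 0.012
for years in (5, 10, 20):
    gain = (1 + extra_growth) ** years - 1
    print(f"after {years:2d} years: {gain:.1%} above baseline")
```

This is also why the adoption gap matters: early adopters compound the advantage every year that laggards sit out.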
“Blueprint: How DNA Makes Us Who We Are” by Robert Plomin. Argues that genetic differences cause most variation in human psychological traits. Accumulating evidence for the dominance of nature over nurture has many potentially disruptive implications.

See also, "Top 10 Replicated Findings from Behavioral Genetics" by Plomin et al
Surprise. The implications of the body of research this book compiles and synthesizes have enormous disruptive potential, at the economic, social, and ultimately political level.