nep-big 2019-11-18 papers
nep-big New Economics Papers
on Big Data
Issue of 2019-11-18
ten papers chosen by
Tom Coupé
University of Canterbury

  1. Machine learning, human experts, and the valuation of real assets By Aubry, Mathieu; Kräussl, Roman; Manso, Gustavo; Spaenjers, Christophe
  2. Predicting bank distress in the UK with machine learning By Suss, Joel; Treitel, Henry
  3. Money Neutrality, Monetary Aggregates and Machine Learning By Gogas, Periklis; Papadimitriou, Theophilos; Sofianos, Emmanouil
  4. Cross-country differences in the size of venture capital financing rounds: a machine learning approach By Marco Taboga
  5. Some HCI Priorities for GDPR-Compliant Machine Learning By Veale, Michael; Binns, Reuben; Van Kleek, Max
  6. Group Average Treatment Effects for Observational Studies By Daniel Jacob; Wolfgang Karl Härdle; Stefan Lessmann
  7. Scoping the OECD AI principles: Deliberations of the Expert Group on Artificial Intelligence at the OECD (AIGO) By OECD
  8. LinkedIn(to) Job Opportunities: Experimental Evidence from Job Readiness Training By Wheeler, Laurel; Garlick, Robert; Johnson, Eric; Shaw, Patrick; Gargano, Marissa
  9. Big Tech Acquisitions and the Potential Competition Doctrine: The Case of Facebook By Mark Glick; Catherine Ruetschlin
  10. Tracking the Labor Market with "Big Data" By Tomaz Cajner; Leland Crane; Ryan Decker; Adrian Hamins-Puertolas; Christopher J. Kurz

  1. By: Aubry, Mathieu; Kräussl, Roman; Manso, Gustavo; Spaenjers, Christophe
    Abstract: We study the accuracy and usefulness of automated (i.e., machine-generated) valuations for illiquid and heterogeneous real assets. We assemble a database of 1.1 million paintings auctioned between 2008 and 2015. We use a popular machine-learning technique - neural networks - to develop a pricing algorithm based on both non-visual and visual artwork characteristics. Our out-of-sample valuations predict auction prices dramatically better than valuations based on a standard hedonic pricing model. Moreover, they help explain price levels and sale probabilities even after conditioning on auctioneers' pre-sale estimates. Machine learning is particularly helpful for assets that are associated with high price uncertainty. It can also correct human experts' systematic biases in expectations formation - and identify ex ante situations in which such biases are likely to arise.
    Keywords: asset valuation,auctions,experts,big data,machine learning,computer vision,art
    JEL: C50 D44 G12 Z11
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:zbw:cfswop:635&r=all
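As an illustration of the comparison this abstract describes, the sketch below pits a hedonic (linear) pricing model against a neural network on synthetic "artwork" data. All features and the price-generating process are hypothetical, not the paper's actual data or specification.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 2000
# Hypothetical hedonic characteristics (e.g. size, artist reputation, year).
X = rng.normal(size=(n, 3))
# Log price with a non-linear interaction that a linear hedonic model misses.
y = X[:, 0] + 0.5 * X[:, 1] + 2.0 * X[:, 0] * X[:, 1] + rng.normal(scale=0.1, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
hedonic = LinearRegression().fit(X_tr, y_tr)
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0).fit(X_tr, y_tr)

mae_hedonic = mean_absolute_error(y_te, hedonic.predict(X_te))
mae_net = mean_absolute_error(y_te, net.predict(X_te))
```

Because the simulated prices contain an interaction term, the out-of-sample error of the neural network falls below that of the linear hedonic model, mirroring the paper's headline comparison in miniature.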
  2. By: Suss, Joel (Bank of England); Treitel, Henry (Bank of England)
    Abstract: Using novel data and machine learning techniques, we develop an early warning system for bank distress. The main input variables come from confidential regulatory returns, and our measure of distress is derived from supervisory assessments of bank riskiness from 2006 through to 2012. We contribute to a nascent academic literature utilising new methodologies to anticipate negative firm outcomes, comparing and contrasting classic linear regression techniques with modern machine learning approaches that are able to capture complex non-linearities and interactions. We find the random forest algorithm significantly and substantively outperforms other models when utilising the AUC and Brier Score as performance metrics. We go on to vary the relative cost of false negatives (missing actual cases of distress) and false positives (wrongly predicting distress) for discrete decision thresholds, finding that the random forest again outperforms the other models. We also contribute to the literature examining drivers of bank distress, using state-of-the-art machine learning interpretability techniques, and demonstrate that ensembling techniques yield additional performance gains. Overall, this paper makes important contributions, not least of which is practical: bank supervisors can utilise our findings to anticipate firm weaknesses and take appropriate mitigating action ahead of time.
    Keywords: Machine learning; bank distress; early warning system
    JEL: C14 C33 C52 C53 G21
    Date: 2019–10–04
    URL: http://d.repec.org/n?u=RePEc:boe:boeewp:0831&r=all
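A minimal sketch of the evaluation setup this abstract describes: a random forest "distress" classifier compared against a linear (logistic) benchmark using AUC and the Brier score. The data are simulated (the paper's confidential regulatory inputs are not reproduced), with a deliberate interaction term that the linear model cannot capture.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 3000
X = rng.normal(size=(n, 5))  # hypothetical bank indicators
# Distress probability driven partly by an interaction of two "ratios".
logit = X[:, 0] + X[:, 1] * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
rf = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_tr, y_tr)
lr = LogisticRegression().fit(X_tr, y_tr)

p_rf = rf.predict_proba(X_te)[:, 1]
auc_rf = roc_auc_score(y_te, p_rf)                       # discrimination
auc_lr = roc_auc_score(y_te, lr.predict_proba(X_te)[:, 1])
brier_rf = brier_score_loss(y_te, p_rf)                   # calibration
```

On this simulated data the forest's AUC exceeds the logistic benchmark's, for the same reason the paper cites: tree ensembles pick up non-linearities and interactions that linear models miss.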
  3. By: Gogas, Periklis (Democritus University of Thrace, Department of Economics); Papadimitriou, Theophilos (Democritus University of Thrace, Department of Economics); Sofianos, Emmanouil (Democritus University of Thrace, Department of Economics)
    Abstract: The issue of whether or not money affects real economic activity (money neutrality) has attracted significant empirical attention over the last five decades. If money is neutral even in the short-run, then monetary policy is ineffective and its role limited. If money matters, it will be able to forecast real economic activity. In this study, we test the traditional simple sum monetary aggregates that are commonly used by central banks all over the world and also the theoretically correct Divisia monetary aggregates proposed by the Barnett Critique (Chrystal and MacDonald, 1994; Belongia and Ireland, 2014), both at three levels of aggregation: M1, M2, and M3. We use them to directionally forecast the Eurocoin index: a monthly index that measures the growth rate of the euro area GDP. The data span from January 2001 to June 2018. The forecasting methodology we employ is support vector machines (SVM) from the area of machine learning. The empirical results show that: (a) the Divisia monetary aggregates outperform the simple sum ones and (b) both monetary aggregates can directionally forecast the Eurocoin index, reaching a highest accuracy of 82.05% and providing evidence against money neutrality even in the short term.
    Keywords: Eurocoin; simple sum; Divisia; SVM; machine learning; forecasting; money neutrality
    JEL: E00 E27 E42 E51 E58
    Date: 2019–07–05
    URL: http://d.repec.org/n?u=RePEc:ris:duthrp:2016_004&r=all
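Directional forecasting, as used here, means classifying only the sign of next period's growth rather than its level. A toy sketch with a support vector machine follows; the data are simulated, not Eurocoin or actual monetary aggregates.

```python
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n = 400
X = rng.normal(size=(n, 3))  # hypothetical lagged money-growth features
# Binary target: the sign of a noisy "growth" signal.
direction = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=n) > 0).astype(int)

# shuffle=False mimics an out-of-sample split in time order.
X_tr, X_te, y_tr, y_te = train_test_split(X, direction, shuffle=False)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))  # directional accuracy
```

The reported 82.05% in the paper is this kind of accuracy figure: the share of months in which the model calls the direction of the Eurocoin index correctly.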
  4. By: Marco Taboga (Bank of Italy)
    Abstract: We analyze the potential determinants of the size of venture capital financing rounds. We employ stacked generalization and boosted trees, two of the most powerful machine learning tools in terms of predictive power, to examine a large dataset on start-ups, venture capital funds and financing transactions. We find that the size of financing rounds is mainly associated with the characteristics of the firms being financed and with the features of the countries in which the firms are headquartered. Cross-country differences in the degree of development of the venture capital industry, while highly correlated with the size of funding rounds, are not significant once we control for other country-level characteristics. We discuss how our findings contribute to the debate about policy interventions aimed at stimulating start-up financing.
    Keywords: venture capital, financial institutions, country characteristics, machine learning
    JEL: G24 F0 C19
    Date: 2019–11
    URL: http://d.repec.org/n?u=RePEc:bdi:wptemi:td_1243_19&r=all
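The two methods this abstract names can be sketched side by side: gradient-boosted trees, and stacked generalization, in which out-of-fold predictions from several base learners feed a final combining model. Everything below is synthetic; the features stand in loosely for the paper's firm- and country-level characteristics.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(3)
X = rng.normal(size=(1500, 4))  # hypothetical firm/country characteristics
# Continuous outcome (think: log round size) with a non-linear component.
y = X[:, 0] ** 2 + X[:, 1] + rng.normal(scale=0.2, size=1500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=3)

# Boosted trees: sequential trees fit to the previous ensemble's residuals.
boosted = GradientBoostingRegressor(random_state=3).fit(X_tr, y_tr)

# Stacked generalization: base learners' cross-validated predictions become
# the inputs of a final (here Ridge) meta-model.
stack = StackingRegressor(
    estimators=[("tree", DecisionTreeRegressor(max_depth=5, random_state=3)),
                ("ridge", Ridge())],
    final_estimator=Ridge(),
).fit(X_tr, y_tr)

r2_boost = boosted.score(X_te, y_te)
r2_stack = stack.score(X_te, y_te)
```

Both approaches are valued for predictive power rather than interpretability, which is why the paper pairs them with explicit country-level controls when drawing economic conclusions.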
  5. By: Veale, Michael; Binns, Reuben; Van Kleek, Max
    Abstract: Cite as Michael Veale, Reuben Binns and Max Van Kleek (2018) Some HCI Priorities for GDPR-Compliant Machine Learning. The General Data Protection Regulation: An Opportunity for the CHI Community? (CHI-GDPR 2018), Workshop at ACM CHI'18, 22 April 2018, Montreal, Canada. In this short paper, we consider the roles of HCI in enabling the better governance of consequential machine learning systems using the rights and obligations laid out in the recent 2016 EU General Data Protection Regulation (GDPR)---a law which involves heavy interaction with people and systems. Focussing on those areas that relate to algorithmic systems in society, we propose roles for HCI in legal contexts in relation to fairness, bias and discrimination; data protection by design; data protection impact assessments; transparency and explanations; the mitigation and understanding of automation bias; and the communication of envisaged consequences of processing.
    Date: 2018–03–19
    URL: http://d.repec.org/n?u=RePEc:osf:lawarx:wm6yk&r=all
  6. By: Daniel Jacob; Wolfgang Karl Härdle; Stefan Lessmann
    Abstract: The paper proposes an estimator to make inference on key features of heterogeneous treatment effects sorted by impact groups (GATES) for non-randomised experiments. Observational studies are standard in policy evaluation, from labour markets and educational surveys to other empirical settings. To control for potential selection bias we implement a doubly-robust estimator in the first stage. Keeping the flexibility to use any machine learning method to learn the conditional mean functions as well as the propensity score, we also use machine learning methods to learn a function for the conditional average treatment effect. The group average treatment effect is then estimated via a parametric linear model to provide p-values and confidence intervals. The result is a best linear predictor for effect heterogeneity based on impact groups. Cross-splitting and averaging for each observation is a further extension to avoid biases introduced through sample splitting. The advantage of the proposed method is a robust estimation of heterogeneous group treatment effects under mild assumptions, which is comparable with other models and thus keeps its flexibility in the choice of machine learning methods. At the same time, its ability to deliver interpretable results is ensured.
    Date: 2019–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1911.02688&r=all
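A compact sketch of the doubly-robust (AIPW) first stage described above, estimating an average treatment effect on simulated observational data with selection bias. The paper allows any machine learning learner for the nuisance functions; plain linear and logistic models are used here only to keep the sketch short, and the grouping and cross-splitting steps are omitted.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(4)
n = 4000
X = rng.normal(size=(n, 3))
propensity = 1 / (1 + np.exp(-X[:, 0]))   # treatment depends on X: selection bias
D = rng.binomial(1, propensity)           # non-randomised treatment assignment
tau = 2.0                                  # true treatment effect (known here)
Y = tau * D + X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n)

# Nuisance functions: conditional outcome means and the propensity score.
mu1 = LinearRegression().fit(X[D == 1], Y[D == 1]).predict(X)
mu0 = LinearRegression().fit(X[D == 0], Y[D == 0]).predict(X)
ps = LogisticRegression().fit(X, D).predict_proba(X)[:, 1]
ps = np.clip(ps, 0.01, 0.99)              # guard against extreme weights

# Doubly-robust (AIPW) score: consistent if either the outcome models
# or the propensity model is correctly specified.
psi = mu1 - mu0 + D * (Y - mu1) / ps - (1 - D) * (Y - mu0) / (1 - ps)
ate = psi.mean()
```

The naive treated-minus-control mean would be biased upward here because high-X[:, 0] units are both more likely to be treated and have higher outcomes; the AIPW score recovers the true effect of 2.
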
  7. By: OECD
    Abstract: Artificial intelligence (AI) is reshaping economies, promising to generate productivity gains, improve efficiency and lower costs. At the same time, AI is also fuelling anxieties and ethical concerns. As AI’s impacts permeate our societies, its transformational power must be put at the service of people and the planet. This document presents the work conducted by the Expert Group on Artificial Intelligence at the OECD (AIGO) to scope principles to foster trust in and adoption of AI. In particular, this paper presents a common understanding of what an AI system is, as well as a framework that details the stages of the AI system lifecycle. This work informed the draft Recommendation of the Council on Artificial Intelligence. On 22 May 2019, the OECD Council adopted the Recommendation – also referred to as the OECD AI Principles – at the Ministerial level.
    Date: 2019–11–15
    URL: http://d.repec.org/n?u=RePEc:oec:stiaab:291-en&r=all
  8. By: Wheeler, Laurel (University of Alberta, Department of Economics); Garlick, Robert (Duke University); Johnson, Eric (RTI International); Shaw, Patrick (RTI International); Gargano, Marissa (RTI International)
    Abstract: Online professional networking platforms are widely used and offer the prospect of alleviating labor market frictions. We run the first randomized evaluation of training workseekers to join one of these platforms. Training increases employment at the end of the program from 70% to 77%, and this effect persists for at least twelve months. Treatment effects on platform use explain most of the treatment effect on employment. Administrative data suggest that platform use increases employment by providing information to prospective employers and to workseekers. It may also facilitate referrals but does not reduce job search costs or change self-beliefs.
    Keywords: employment; information frictions; online platforms; social networks; field experiment
    JEL: J22 J23 J24 J64 M51 O15
    Date: 2019–09–12
    URL: http://d.repec.org/n?u=RePEc:ris:albaec:2019_014&r=all
  9. By: Mark Glick (University of Utah); Catherine Ruetschlin (University of Utah)
    Abstract: The Big Tech companies, including Google, Facebook, Amazon, Microsoft and Apple, have individually and collectively engaged in an unprecedented number of acquisitions. When a dominant firm purchases a start-up that could be a future entrant and thereby increase competitive rivalry, it raises a potential competition issue. Unfortunately, the antitrust law of potential competition mergers is ill-equipped to address tech mergers. We contend that the Chicago School's assumptions and policy prescriptions hobbled antitrust law and policy on potential competition mergers. We illustrate this problem with the example of Facebook. Facebook has engaged in 90 completed acquisitions in its short history (documented in the Appendix to this paper). Many antitrust commentators have focused on the Instagram and WhatsApp acquisitions as cases of mergers that have reduced potential competition. We show the impotence of the potential competition doctrine applied to these two acquisitions. We suggest that the remedy for Chicago School damage to the potential competition doctrine is a return to an empirically tractable structural approach to potential competition mergers.
    Keywords: Antitrust Law, Big Tech Companies, Digital Markets, Mergers, Potential Competition
    JEL: K21 L40 L86
    Date: 2019–10
    URL: http://d.repec.org/n?u=RePEc:thk:wpaper:104&r=all
  10. By: Tomaz Cajner; Leland Crane; Ryan Decker; Adrian Hamins-Puertolas; Christopher J. Kurz
    Abstract: In our research, we explore the information content of the ADP microdata alone by producing an estimate of employment changes independent from the BLS payroll series as well as from other data sources.
    Date: 2019–09–20
    URL: http://d.repec.org/n?u=RePEc:fip:fedgfn:2019-09-20-1&r=all

This nep-big issue is ©2019 by Tom Coupé. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.