The workshop will have several invited plenary speakers from academia and industry.
| Date | Time | Speaker |
| --- | --- | --- |
| June 1, 2017 | 8:30-9:00 AM | Tuomas Sandholm, Carnegie Mellon Computer Science |
| June 1, 2017 | 9:00-9:30 AM | Hal Varian, Google |
| June 1, 2017 | 11:30 AM-12:00 PM | Susan Athey, Stanford Graduate School of Business |
| June 1, 2017 | 12:00-12:30 PM | Asu Ozdaglar, MIT Electrical Engineering and Computer Science |
| June 2, 2017 | 8:30-9:00 AM | Garud Iyengar, Columbia Industrial Engineering and Operations Research |
| June 2, 2017 | 9:00-9:30 AM | Bob Phillips, Uber Technologies |
| June 2, 2017 | 11:30 AM-12:00 PM | Paat Rusmevichientong, USC Marshall School of Business |
| June 2, 2017 | 12:00-12:30 PM | Paul Milgrom, Stanford Economics |
| June 2, 2017 | 4:00-4:30 PM | Vivek Farias, MIT Sloan School of Management |
| June 2, 2017 | 4:30-5:00 PM | Ashish Goel, Stanford Management Science and Engineering |
| June 2, 2017 | 5:00-5:30 PM | Omar Besbes, Columbia Business School |
See below for details on each talk.
Tuomas Sandholm, Carnegie Mellon Computer Science
June 1, 2017, 8:30-9:00 AM
Abstract: In multi-item settings, it is typically unrealistic to assume that the designer has a prior over buyers’ valuations. We analyze revenue-maximizing sales mechanisms when the designer has samples from the prior, as introduced by Likhodedov and Sandholm in AAAI-04. We provide generalization guarantees that bound the difference between profit on the sample and profit over the unknown prior. Our overarching theorem uses Rademacher complexity to measure intrinsic complexity of widely-studied single- and multi-item auction classes, such as affine maximizers, virtual valuations combinatorial auctions, lambda-auctions, and mixed bundling auctions. It also applies to linear and non-linear single- and multi-item pricing. Furthermore, it enables one to determine the optimal complexity to select in mechanism hierarchies.
This is joint work with Nina Balcan and Ellen Vitercik.
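To make the sample-based setting concrete, here is a minimal illustrative sketch (not the paper's method, which covers much richer mechanism classes): given i.i.d. samples of a single buyer's valuation, choose the posted price that maximizes empirical revenue on the sample. The generalization guarantees discussed in the talk bound how far such a sample-optimal mechanism's revenue can fall below the optimum for the unknown prior.

```python
import random

def best_sample_price(valuations):
    """Return the posted price maximizing empirical revenue on the sample."""
    # Only prices equal to some observed valuation need be considered:
    # lowering the price between two sample points never gains revenue.
    candidates = sorted(set(valuations))
    def revenue(p):
        return p * sum(1 for v in valuations if v >= p) / len(valuations)
    return max(candidates, key=revenue)

random.seed(0)
# Uniform[0, 1] prior, unknown to the seller; only samples are observed.
samples = [random.uniform(0, 1) for _ in range(1000)]
price = best_sample_price(samples)
print(round(price, 2))  # close to 0.5, the monopoly price for Uniform[0, 1]
```

With enough samples the empirical optimizer concentrates near the true optimal price; the talk's Rademacher-complexity bounds quantify this for far more complex auction classes.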
Hal Varian, Google
June 1, 2017, 9:00-9:30 AM
Abstract: I describe a flexible model for estimating ad effectiveness using synthetic controls. The methods described can be used for many other kinds of prediction, causal inference, and anomaly detection.
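The basic synthetic-control idea can be sketched as follows (an illustrative toy, not the flexible model of the talk): fit weights on untreated control series over the pre-intervention period, use the weighted combination to predict the treated series' counterfactual after the intervention, and read off the gap as the estimated effect.

```python
import numpy as np

rng = np.random.default_rng(1)
T_pre, T_post = 50, 20
# Three hypothetical untreated control markets.
controls = rng.normal(100, 5, size=(T_pre + T_post, 3))
# Treated market follows a fixed combination of the controls, plus noise.
treated = controls @ np.array([0.5, 0.3, 0.2]) + rng.normal(0, 1, T_pre + T_post)
treated[T_pre:] += 10.0  # true effect of the ad campaign: +10

# Fit weights by least squares on the pre-period only.
w, *_ = np.linalg.lstsq(controls[:T_pre], treated[:T_pre], rcond=None)
counterfactual = controls[T_pre:] @ w
effect = (treated[T_pre:] - counterfactual).mean()
print(round(effect, 1))  # recovers an effect near the true +10
```

The same fit-on-pre-period, predict-post-period pattern underlies the other uses mentioned (prediction, causal inference, anomaly detection): an unusually large gap flags an anomaly.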
Susan Athey, Stanford Graduate School of Business
June 1, 2017, 11:30 AM-12:00 PM
Abstract: Marketplaces and digital platforms make it easier for consumers to transact with a wide range of sellers. One implication of this trend is that market share may be redistributed from large firms to small firms or even individual entrepreneurs. The welfare consequences of this shift vary from industry to industry. It may cut out previously existing middlemen, and it may change the incentives to provide quality service. I review evidence about these trends in a variety of industries, and then focus on the case study of the news industry, where aggregators and intermediaries (such as Google News and Facebook) have profound effects on the news people read. I study the effect of the shutdown of Google News in Spain in December 2014, which occurred in response to legislation in Spain targeted at Google News. Evidence from the shutdown shows that Google News is a substitute for top news outlets and a complement for smaller outlets, consistent with the dual role of an aggregator as a search engine that covers a large number of news outlets, and as a direct competitor to the largest news outlets as a place to read news. I then present recent evidence of similar types of redistribution occurring among heavy Facebook users in the 2016 election.
Asu Ozdaglar, MIT Electrical Engineering and Computer Science
June 1, 2017, 12:00-12:30 PM
Abstract: Many online platforms present summaries of reviews by previous users. Even though such reviews could be useful, previous users leaving reviews are typically a selected sample of those who have purchased the good in question, and may consequently have a biased assessment. In this paper, we construct a simple model of dynamic Bayesian learning and profit-maximizing behavior of online platforms to investigate whether such review systems can successfully aggregate past information and the incentives of the online platform to choose the relevant features of the review system.
On the consumer side, we assume that each individual cares about the underlying quality of the good in question, but in addition has heterogeneous ex ante and ex post preferences (meaning that she has a different strength of preference for the good in question than other users, and her enjoyment conditional on purchase is also a random variable). After purchasing a good, depending on how much they have enjoyed it, users can decide to leave a positive or a negative review (or leave no review if they do not have strong preferences). New users observe a summary statistic of past reviews (such as the fraction of all reviews that are positive, or the fraction of all users that have left a positive review). Our first major result shows that, even though reviews come from a selected sample of users, Bayesian learning ensures that as the number of potential users grows, the assessment of the underlying state converges almost surely to the true quality of the good. More importantly, we provide a tight characterization of the speed of learning (a contribution relative to most work in this area, which focuses on whether learning occurs rather than on how fast). Under the assumption that the online platform receives a constant revenue from every user who purchases (because of commissions from sellers or advertising revenues), we then show that, in any Bayesian equilibrium, the profits of the online platform are a function of the speed of learning of users. Using this result, we study the design of the review system by the online platform.
This is joint work with Daron Acemoglu, Ali Makhdoumi, and Azarakhsh Malekian.
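The learning-despite-selection point can be illustrated with a toy simulation (my own simplified setup, not the paper's model): quality is Good or Bad, and because reviews come from a selected sample of purchasers, a positive review arrives with probability 0.7 under Good but still 0.4 under Bad. A Bayesian observer who knows these selection-adjusted likelihoods recovers the true quality as reviews accumulate.

```python
import random

# Selection-adjusted probability of a positive review under each quality state.
P_POS = {"Good": 0.7, "Bad": 0.4}

def posterior_good(reviews, prior=0.5):
    """Posterior probability of Good given a sequence of True/False reviews."""
    p = prior
    for positive in reviews:
        like_good = P_POS["Good"] if positive else 1 - P_POS["Good"]
        like_bad = P_POS["Bad"] if positive else 1 - P_POS["Bad"]
        p = p * like_good / (p * like_good + (1 - p) * like_bad)
    return p

random.seed(2)
truth = "Good"
reviews = [random.random() < P_POS[truth] for _ in range(500)]
print(round(posterior_good(reviews), 3))  # near 1.0: learning succeeds despite selection
```

The rate at which this posterior concentrates is the "speed of learning" that, per the talk, ties directly to the platform's equilibrium profits.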
Garud Iyengar, Columbia Industrial Engineering and Operations Research
June 2, 2017, 8:30-9:00 AM
Abstract: Customers are exposed to multiple advertisements across many different channels before they make a purchase, i.e. convert. A key question facing the advertising industry is that of attributing portions of conversion to the various channels that potentially influenced the conversion. We propose a framework for attribution that accounts for budgets and marginal impact of each channel. We also evaluate current practices using this framework.
Joint work with Omar Besbes, Antoine Desir, and Vineet Goyal.
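One standard way to attribute conversions by marginal impact is the Shapley value; the following sketch is only an illustration of that general idea (the talk's framework additionally accounts for channel budgets, which this omits). The `conv` table below, mapping channel subsets to conversion rates, is entirely hypothetical.

```python
from itertools import combinations
from math import factorial

# Hypothetical conversion rate for each subset of advertising channels.
conv = {
    frozenset(): 0.00,
    frozenset({"search"}): 0.05,
    frozenset({"display"}): 0.02,
    frozenset({"social"}): 0.01,
    frozenset({"search", "display"}): 0.08,
    frozenset({"search", "social"}): 0.06,
    frozenset({"display", "social"}): 0.03,
    frozenset({"search", "display", "social"}): 0.10,
}

def shapley(channels, value):
    """Attribute value[all channels] by average marginal contribution."""
    n = len(channels)
    attribution = {}
    for ch in channels:
        others = [c for c in channels if c != ch]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                s = frozenset(subset)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value[s | {ch}] - value[s])
        attribution[ch] = total
    return attribution

attr = shapley(["search", "display", "social"], conv)
print({c: round(v, 3) for c, v in attr.items()})
```

By construction the attributions sum to the full-set conversion rate, so every conversion is split across channels in proportion to average marginal impact — the kind of property against which current industry practices (e.g. last-touch attribution) can be evaluated.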
Bob Phillips, Uber Technologies
June 2, 2017, 9:00-9:30 AM
Abstract: Uber faces the challenge of balancing supply and demand in a two-sided marketplace in which it has no direct control over either side of the market. This poses a number of important questions. What does it mean for a market to be “balanced”? What does it mean to be “over-supplied” or “under-supplied” and what are the costs of being in either situation? What actions should be taken when a market is out of balance? We discuss some of the approaches that Uber uses to address these questions in its highly dynamic temporal-spatial marketplaces.
Paat Rusmevichientong, USC Marshall School of Business
June 2, 2017, 11:30 AM-12:00 PM
Abstract: Choice-based demand models based on the random utility maximization (RUM) principle are routinely used in academic literature and industry practice. However, the RUM principle may be violated in practice because customer preferences may not be rational. This raises the following empirical questions: (a) Given a dataset consisting of offer sets and individual choices, are the observed choice probabilities consistent with the RUM principle? (b) If not, what is the degree of inconsistency?
We formulate the problem of quantifying the limit of rationality (LoR) in choice modeling applications. Computing LoR is intractable in the worst case, but we identify the source of complexity through new concepts of rational separation and choice graph. By exploiting the graph structure, we provide practical methods to compute LoR efficiently for a large class of applications. Applying our methods to real-world grocery sales data, we identify product categories for which going beyond rational choice models is necessary to obtain acceptable performance.
Joint work with Srikanth Jagabathula (NYU).
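A quick necessary condition for RUM consistency, useful for intuition here, is "regularity": enlarging the offer set can never increase the choice probability of an item already in it. The sketch below (with made-up data) flags such violations; when they occur, no RUM model fits the data exactly, which is what motivates measuring the degree of inconsistency.

```python
# Observed choice probabilities: offer set -> {item: choice probability}.
choice_probs = {
    frozenset({"a", "b"}): {"a": 0.60, "b": 0.40},
    frozenset({"a", "b", "c"}): {"a": 0.65, "b": 0.20, "c": 0.15},  # "a" rose!
}

def regularity_violations(data):
    """Return (item, smaller_set, larger_set) triples violating regularity."""
    violations = []
    for small, p_small in data.items():
        for large, p_large in data.items():
            if small < large:  # proper subset of the larger offer set
                for item in small:
                    if p_large.get(item, 0.0) > p_small[item] + 1e-12:
                        violations.append((item, small, large))
    return violations

print(regularity_violations(choice_probs))  # flags "a": 0.60 -> 0.65
```

Detecting a violation is easy; the hard part addressed in the talk is quantifying how far the data is from the nearest RUM-consistent choice probabilities, which is what the limit of rationality measures.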
Paul Milgrom, Stanford Economics
June 2, 2017, 12:00-12:30 PM
Abstract: The recent “incentive auction” of the U.S. Federal Communications Commission was the first auction to re-allocate radio frequencies between two different kinds of uses: from broadcast television to wireless Internet access. The design challenge was not just to choose market rules to govern a fixed set of potential trades, but also to determine the broadcasters’ property rights, the goods to be exchanged, the quantities to be traded, the computational procedures, and even some of the performance objectives. An essential and unusual challenge was to make the auction simple enough for human participants while still ensuring that the computations would be tractable and capable of delivering nearly efficient outcomes.
Vivek Farias, MIT Sloan School of Management
June 2, 2017, 4:00-4:30 PM
Abstract: Product and content personalization is now ubiquitous in e-commerce. Available transactional data is typically too sparse for this task. As such, firms today seek to use a variety of information on the interactions between a product and a customer to drive personalization decisions. We formalize this problem as one of recovering a large-scale matrix with side information in the form of additional matrices of conforming dimension. Viewing the matrix we seek to recover and the side information we have as slices of a tensor, we consider the problem of Slice Recovery, which is to recover specific slices of ‘simple’ tensors from noisy observations of the entire tensor. We provide an efficient algorithm for slice recovery that is practical for massive datasets and provides a significant performance improvement over state-of-the-art incumbent approaches to tensor recovery. Further, we establish near-optimal recovery guarantees that in an important regime represent an order improvement over the best available results for this problem. Experiments on data from a music streaming service demonstrate the performance and scalability of our algorithm.
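For intuition about what "recovering a simple matrix from noisy observations" means, here is a generic low-rank denoising baseline via truncated SVD; this is only an illustrative point of reference, not the talk's slice-recovery algorithm, which additionally exploits the side-information slices of the tensor.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, r = 60, 40, 2
# Rank-2 ground truth (e.g., latent customer and product factors).
truth = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))
noisy = truth + rng.normal(scale=0.1, size=(m, n))

# Keep only the top-r singular components of the noisy observation.
U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
recovered = (U[:, :r] * s[:r]) @ Vt[:r]

err = np.linalg.norm(recovered - truth) / np.linalg.norm(truth)
print(round(err, 3))  # well below the raw observation's relative error
```

Truncation discards most of the noise energy because the signal lives in only r directions; side information helps precisely when the observed slice alone is too sparse or noisy for this kind of spectral estimate to work.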
Ashish Goel, Stanford Management Science and Engineering
June 2, 2017, 4:30-5:00 PM
Abstract: YouTube competes with Hollywood as an entertainment channel, and also supplements Hollywood by acting as a distribution mechanism. Twitter has a similar relationship to news media, and Coursera to Universities. But there are no online alternatives for making democratic decisions at large scale as a society. In this talk, we will describe some algorithmic approaches towards large scale decision making that we are exploring. In particular, we will describe algorithms for voting in elections which design a budget, and for deliberative processes where a group decision is made via a succession of individual iterations (inspired by prediction markets) or small group interactions (inspired by Nash bargaining). We will also present general impossibility and fairness results for cardinal utilities given ordinal votes, under the metric assumption.
We will also describe our experience running crowdsourced democracy processes in the US, Canada, and Finland. Finally, we will outline several open algorithmic and game-theoretic problems in this space.
This represents joint work with Tanja Aitamurto, Brandon Fain, Nikhil Garg, Vijay Kamble, Anilesh Krishnaswamy, David Marn, Kamesh Munagala, and Sukolsak Sakshuwong.
Omar Besbes, Columbia Business School
June 2, 2017, 5:00-5:30 PM
Abstract: In many auctions, buyers can be represented by an intermediary that manages their bidding process, along with the bidding process of other buyers. Notably, in the real-time bidding market for online display advertising, in which advertisers bid for impressions through intermediaries called demand side platforms (DSPs), this is more the norm than the exception. In turn, intermediaries, when deciding what to bid on behalf of their customers, strategize to maximize some internal objective and may only submit a single bid to limit competition on a given item, leading to some form of collusion. In the present work, we propose a framework to analyze the implications of such an active/centralized role by DSPs in a general market. We take as a benchmark the case in which each DSP would manage the bidding process of each advertiser it represents independently of other buyers, a case we refer to as multi-bidding. We characterize the impact of the adoption of multi-bidding (together with intermediaries) in various regimes of interest and, quite remarkably, establish that multi-bidding leads to a Pareto improvement in the value chain under a very broad set of market characteristics. We discuss implications for market norms. (Joint work with A. Allouah.)