Industry Event

A new addition to the standard program of previous editions of the conference, the Industry Day section (separate from the industry research track) consists of talks by visionaries from industry and from the venture capital (VC) community, located mostly in Silicon Valley. The talks are meant to give the conference audience a sense of what it takes to move an idea from its initial write-up to a trend-changing success that improves the lives of millions of people around the world. Speakers and panelists for Industry Day are by invitation only.

List of Speakers

Andrei Z. Broder (Yahoo! Research, USA)
Steven Gary Blank (Stanford University, USA)
Christopher J. C. Burges (Microsoft Research, USA)
William Chang (Baidu, China)
Doug Cutting (Yahoo!, USA)
Ronny Kohavi (Microsoft, USA)
Peter Norvig (Google, USA)
Daniel E. Rose (, USA)
Marius Pasca (Google, USA)
James G. (Jimi) Shanahan (Church and Duncan Group, Inc., USA)


Andrei Z. Broder

Affiliation: Fellow and VP of Computational Advertising, Yahoo! Research

Bio: Andrei Broder is a Fellow and Vice President for Computational Advertising at Yahoo! Research. He also serves as Chief Scientist of Yahoo!'s Advertising Technology Group. Previously he was an IBM Distinguished Engineer and the CTO of the Institute for Search and Text Analysis at IBM Research. From 1999 until 2002 he was Vice President for Research and Chief Scientist at the AltaVista Company. He graduated summa cum laude from the Technion (Israel Institute of Technology) and obtained his M.Sc. and Ph.D. in Computer Science at Stanford University under Don Knuth. His current research interests are centered on computational advertising, web search, context-driven information supply, and randomized algorithms. Broder is a co-winner of the Best Paper award at WWW6 (for his work on duplicate elimination of web pages) and at WWW9 (for his work on mapping the web). He has authored more than eighty papers and has been awarded twenty-five patents. He is an ACM Fellow, an IEEE Fellow, and past chair of the IEEE Technical Committee on Mathematical Foundations of Computing.

Presentation title: The Evolving Computational Advertising Landscape

Presentation abstract: Computational advertising (CA) is being established as a new scientific sub-discipline, at the intersection of large scale search and text analysis, information retrieval, statistical modeling, machine learning, classification, optimization, and microeconomics. The central goal of CA is to find the "best match" between a given user in a given context and a suitable advertisement.

This talk posits that although this goal is likely to be with us for the foreseeable future, the parameters of the problem will change rapidly, driven on one hand by a deeper understanding of users' perspective and behavior, and on the other hand, by emerging technological, social, economic, and legal changes that will impact both the potential advertising contexts and the available advertisements, and thus ultimately the advertising marketplace. To support this argument, we will examine each facet of the CA ecosystem: users, context, and advertisers, identify some emerging trends, and venture some speculative possibilities.
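The "best match" goal described in the abstract can be sketched, in a deliberately toy form, as a similarity-scoring problem. All feature names and weights below are invented for illustration; real computational-advertising systems also fold in bids, budgets, and auction mechanics.

```python
# Toy illustration of the "best match" formulation: score each ad by
# the similarity between its feature vector and the (user, context)
# feature vector, then pick the highest-scoring ad.

def dot(u, v):
    """Sparse dot product over dicts mapping feature -> weight."""
    return sum(w * v.get(f, 0.0) for f, w in u.items())

def best_ad(query_features, ads):
    """Return the ad whose features best match the query context."""
    return max(ads, key=lambda ad: dot(query_features, ad["features"]))

# Hypothetical context: a user reading a page about trail running.
context = {"running": 1.0, "shoes": 0.8, "outdoor": 0.5}

ads = [
    {"id": "ad-camera",   "features": {"camera": 1.0, "photo": 0.7}},
    {"id": "ad-sneakers", "features": {"running": 0.9, "shoes": 1.0}},
    {"id": "ad-tent",     "features": {"outdoor": 0.6, "camping": 1.0}},
]

print(best_ad(context, ads)["id"])  # ad-sneakers
```

The sketch deliberately reduces "user in a given context" to a single bag of weighted features; the abstract's point is precisely that each of those facets is richer and changing.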

Steven Gary Blank

Affiliation: Stanford University, Graduate School of Engineering and University of California Berkeley, Haas Business School

Bio: Over the last 30 years, Steve has been part of, or co-founded, eight Silicon Valley startups (MIPS, Zilog, Rocket Science, SuperMac, Convergent Technologies, Ardent, ESL). These have run the gamut from semiconductors and video games to personal computers and supercomputers. Steve's last company was E.piphany, an enterprise software company. Steve is on the board of an on-line marketplace and of IMVU, a 3D IM social network. Steve also serves on the California Coastal Commission, is on the board of Audubon National, is Chairman of Audubon California, and is on the board of the Peninsula Open Space Trust (POST). Steve currently teaches entrepreneurship at the U.C. Berkeley Haas Business School, the joint Berkeley/Columbia Executive MBA program, and the Stanford University Graduate School of Engineering. Steve teaches a methodology for starting and managing marketing, sales, and business development in high-technology startups. Course text at

Presentation title: The Secret History of Silicon Valley

Presentation abstract: The origins of entrepreneurship in Silicon Valley have many threads: world-class universities, regional climate and culture, capital markets, and more.

This talk tells the story of how World War II set the stage for the growth of defense industries in Silicon Valley during the Cold War. It describes the seminal role of Frederick Terman and Stanford in working with the CIA and the National Security Agency during the Cold War to invent the entrepreneurial culture in the valley.

Christopher J. C. Burges

Affiliation: Principal Researcher, Microsoft Research

Bio: Chris Burges started out with a PhD in theoretical physics, and has been conducting a random walk in career space ever since. He was at Bell Labs for 14 years, working on network performance modeling, network routing algorithms, and machine learning: AT&T used an algorithm he designed to route their CCS7 signaling network. He also worked on OCR and neural nets for handwriting and machine print recognition for check and ZIP code reading; he contributed to work that is still used to process millions of checks daily in bank back offices. Chris was then fortunate to have an opportunity to work with Vladimir Vapnik on an ARPA contract to develop Support Vector Machines. After joining Microsoft Research in 2000, Chris has worked on a variety of projects, principally audio fingerprinting, which currently ships as part of Windows Media Player, and also on ranking for information retrieval, which currently ships in Live Search. He currently manages the Text Mining, Search and Navigation group in Microsoft Research.

Presentation title: The Mountain or The Street Lamp: Search, Research, and Research Again

Presentation abstract: How do the fruits of research find their way from the halls of large industrial research labs to the outside world? How can companies maximize the effectiveness of their research efforts, from both the company's and the researcher's perspectives? I will describe some examples gathered from over two decades of experience in industrial research labs. My observations are informal and are offered in the spirit of personal reminiscences of an involved player, rather than through any claim of formal expertise in the management of large research projects.

William I. Chang

Affiliation: Chief Scientist, Baidu

Bio: Dr. William Chang has been the Chief Scientist at Baidu since January 2007. Prior to joining Baidu, Dr. Chang served as the CTO of Infoseek and the VP of Strategy of Go Network. He created the first successful web-scale natural language search engine Infoseek and the leading enterprise search engine Ultraseek. Dr. Chang has extensive expertise in search technology, online community building and advertising business models.

Dr. Chang earned an undergraduate degree in mathematics from Harvard and a PhD in computer science from the University of California, Berkeley for his breakthrough work in text search. At the renowned Cold Spring Harbor Laboratory, Dr. Chang mapped a genome and invented a protein sequence search methodology. More recently, he created a contextual advertising product at Sentius Corporation, and founded Affini, Inc., a social network technology company.

Presentation title: Toward Next Generation Search: Business, Product, Science, Infrastructure, and Talent

Presentation abstract: TBD

Doug Cutting

Affiliation: Yahoo!

Bio: Doug Cutting has worked on search-related technologies for over 20 years. This includes five years at Xerox PARC, three years at Apple, four years at Excite and three years at Yahoo!. In 1998 he wrote Lucene, an open-source search library which became an Apache project in 2001. In 2003 he founded Nutch, an open-source web search application, and in 2006 he started Hadoop, an open-source distributed-computing framework, both at Apache. Doug currently works for Yahoo!.
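The programming model at the heart of Hadoop is easiest to see in the canonical word-count example. The sketch below mimics the map, shuffle, and reduce phases in plain Python; it is an illustration of the MapReduce style, not the Hadoop API itself.

```python
# Word count in the MapReduce style that Hadoop implements at scale.
from collections import defaultdict

def map_phase(documents):
    """Emit (word, 1) pairs, as a mapper would."""
    for doc in documents:
        for word in doc.split():
            yield word.lower(), 1

def shuffle(pairs):
    """Group values by key, as the framework's shuffle/sort step does."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Sum the counts for each word, as a reducer would."""
    return {word: sum(values) for word, values in groups.items()}

docs = ["the quick brown fox", "the lazy dog", "The fox"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts["the"], counts["fox"])  # 3 2
```

In Hadoop, the map and reduce functions run in parallel across a cluster and the shuffle is handled by the framework; the user code stays about this simple.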

Presentation title: Hadoop: Industrial-Strength Open Source for Data-Intensive Supercomputing

Presentation abstract: TBD

Ronny Kohavi

Affiliation: General Manager, Experimentation Platform, Microsoft

Bio: Ronny Kohavi is the General Manager for Microsoft's Experimentation Platform, a team whose mission is to build a platform that will accelerate software innovation through trustworthy experimentation. Controlled experiments, also called A/B tests, allow evaluating ideas through randomized assignment of users to a Control group or different Treatment groups. The methodology is practically the only scientific method we know to establish causal relationships between ideas and metrics of interest.

Prior to joining Microsoft in 2005, Ronny was the director of data mining and personalization at Amazon.com, where he was responsible for personalization, automation, search engine marketing (SEM), consumer behavior / data mining, site experimentation, and automated e-mail. His teams introduced several features estimated to be worth several hundred million dollars in incremental revenue. Prior to Amazon, Ronny was the Vice President of Business Intelligence at Blue Martini Software, where he led the engineering group responsible for the data collection, analysis, visualization, reporting, and campaign management modules in Blue Martini's applications. Prior to joining Blue Martini, Kohavi managed the MineSet product, Silicon Graphics' award-winning product for data mining and visualization. MineSet was based in part on MLC++, a machine learning library developed at Stanford University.

Ronny received a Ph.D. in Machine Learning from Stanford University and a BA from the Technion, Israel. He was the General Chair for KDD 2004, he co-chaired KDD 99's industrial track with Jim Gray, and he co-chaired the KDD Cup 2000. He was an invited speaker at Emetrics 2007, the SF ACM Data Mining SIG in 2006, Emetrics 2004, KDD 2001's industrial track, and the National Academy of Engineering in 2000.

More information about Ronny is available at

Presentation title: Practical Guide to Controlled Experiments on the Web: Listen to Your Customers not to the HiPPO

Presentation abstract: The web provides an unprecedented opportunity to evaluate ideas quickly using controlled experiments, also called randomized experiments or A/B tests (and their generalizations). Controlled experiments embody the best scientific design for establishing a causal relationship between changes and their influence on user-observable behavior. We provide a practical guide to conducting online experiments, where end-users can help guide the development of features. Our experience indicates that significant learning and return-on-investment (ROI) are seen when development teams listen to their customers, not to the Highest Paid Person's Opinion (HiPPO). We provide multiple real-world examples of controlled experiments with surprising results. We review the important ingredients of running controlled experiments, and discuss their limitations (both technical and organizational). We share key lessons and pitfalls that will help practitioners in running trustworthy controlled experiments.
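The statistical core of such an experiment can be sketched with a two-proportion z-test comparing conversion rates in Control and Treatment. The counts below are invented for illustration; this is a textbook test, not the Experimentation Platform's actual methodology.

```python
# Minimal analysis sketch for a controlled (A/B) experiment:
# did the Treatment change the conversion rate significantly?
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return the z statistic for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)            # pooled rate
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical data: 10,000 users randomly assigned to each variant.
z = two_proportion_z(conv_a=500, n_a=10000, conv_b=570, n_b=10000)
print(abs(z) > 1.96)  # True means significant at the 5% level
```

The hard parts in practice, as the abstract notes, are trustworthiness (randomization, instrumentation, robot filtering) rather than the arithmetic.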

The talk is based on joint work with Randy Henne and Dan Sommerfield.

Peter Norvig

Affiliation: Director of Research, Google

Bio: Peter Norvig is a Fellow of the American Association for Artificial Intelligence and of the Association for Computing Machinery. At Google Inc. he was Director of Search Quality, responsible for the core web search algorithms, from 2002 to 2005, and has been Director of Research from 2005 on.

Previously he was the head of the Computational Sciences Division at NASA Ames Research Center, making him NASA's senior computer scientist. He received the NASA Exceptional Achievement Award in 2001. He has served as an assistant professor at the University of Southern California and a research faculty member at the University of California at Berkeley Computer Science Department, from which he received a Ph.D. in 1986 and the distinguished alumni award in 2006. He has over fifty publications in Computer Science, concentrating on Artificial Intelligence, Natural Language Processing and Software Engineering, including the books Artificial Intelligence: A Modern Approach (the leading textbook in the field), Paradigms of AI Programming: Case Studies in Common Lisp, Verbmobil: A Translation System for Face-to-Face Dialog, and Intelligent Help Systems for UNIX. He is also the author of the Gettysburg Powerpoint Presentation and the world's longest palindromic sentence.

Presentation title: Statistical Learning as the Ultimate Agile Development Tool

Presentation abstract: When faced with the task of bringing high-quality software products to market quickly, many practitioners have turned to agile software development techniques. This talk shows how methods based on statistical learning from very large data sets are the ultimate agile technique.
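The thesis can be illustrated with a toy contrast between hand-coded rules and a model fit to data. The tiny word-count "classifier" below is an invented example: the labeled data, not the code, carries the knowledge, so improving the system means gathering more data rather than rewriting logic.

```python
# Sketch of learning from data instead of hand-coding rules: a
# sentiment "classifier" that is nothing but word counts over
# labeled examples. All examples are invented for illustration.
from collections import Counter

def train(examples):
    """Count how often each word appears under each label."""
    counts = {"pos": Counter(), "neg": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Label by which class's training words the text overlaps more."""
    words = text.lower().split()
    score = lambda label: sum(counts[label][w] for w in words)
    return "pos" if score("pos") >= score("neg") else "neg"

examples = [
    ("great product loved it", "pos"),
    ("terrible broken waste", "neg"),
    ("loved the great service", "pos"),
    ("broken and terrible", "neg"),
]
model = train(examples)
print(classify(model, "great service"))  # pos
```

With very large data sets, even models this simple can be iterated on as fast as any agile sprint: change the data, retrain, measure.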

Daniel E. Rose

Affiliation: Director, Search Relevance,, a subsidiary of

Bio: Daniel E. Rose is the Director of Search Relevance at, which provides the product search engine used on and other retail sites. Previously, he was Director of Search Presentation, Assistance, and Interaction at Yahoo! Inc., where he oversaw several projects for improving user interaction with web search. Dan has been working in the area of search for over fifteen years. At Apple's Advanced Technology Group, he spent several years leading the Information Access Research team, which explored document search, clustering, summarization, and collaborative filtering. His group also developed the core search technology later used in Apple's Sherlock and Spotlight products. Dan has also held key positions at web search pioneer AltaVista and at Xigo, a startup where he architected a search and alert service for realtime financial news. He holds a Ph.D. in Cognitive Science and Computer Science from UC San Diego and a B.A. in Philosophy from Harvard University.

Presentation title: Crowdsourcing for Relevance Evaluation

Presentation abstract: Relevance evaluation is an essential part of the development and maintenance of information retrieval systems. Yet traditional evaluation approaches have several limitations; in particular, conducting new editorial evaluations of a search system can be very expensive. I'll describe a new approach to evaluation based on the crowdsourcing paradigm, in which many online users, drawn from a large community, each perform a small evaluation task.
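The aggregation step implicit in this paradigm can be sketched as a majority vote over per-item worker judgments. The queries, documents, and labels below are hypothetical; production systems typically add worker-quality weighting on top of this.

```python
# Aggregate small crowdsourced relevance judgments into one label
# per (query, document) pair by simple majority vote.
from collections import Counter

def majority_label(judgments):
    """Return the most common label among one item's judgments."""
    return Counter(judgments).most_common(1)[0][0]

def aggregate(raw):
    """raw maps (query, doc) -> list of worker labels."""
    return {item: majority_label(labels) for item, labels in raw.items()}

raw = {
    ("jaguar speed", "doc1"): ["relevant", "relevant", "not relevant"],
    ("jaguar speed", "doc2"): ["not relevant", "not relevant", "relevant"],
}
print(aggregate(raw))
```

Because each worker does only a small task, many cheap, noisy judgments substitute for a few expensive editorial ones, which is the cost argument the abstract makes.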
