In the fable "The Frog and the Ox" by Aesop, a frog tries to appear bigger than it actually is by inflating itself more and more... till it bursts. Well, that's not quite the end of the story, but the frog's behaviour somewhat resembles what Facebook has done with one of its metrics: the average time people spend watching a video. This is a key metric for prospective ad space buyers, since it is a proxy for the average exposure of Facebook users to ads: the longer the exposure, the higher the value of the advertising space. The metric is particularly important for a company like Facebook, whose product is basically you and your time. The news broke after the advertising company Publicis warned its customers about the alleged miscalculation by Facebook, and was reported by the Wall Street Journal. Facebook itself admitted the "discrepancy" by posting a clarification note on the computation method. Continue reading
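According to the press coverage, the discrepancy arose because views shorter than a few seconds were excluded from the denominator of the average. A minimal sketch of that effect, with entirely made-up watch times, shows how dropping short views inflates the figure:

```python
# Made-up per-view watch times (seconds); illustrative only.
watch_times = [1, 2, 2, 5, 30, 60]

# Honest average: total watch time over ALL views.
avg_all = sum(watch_times) / len(watch_times)

# Inflated average: only views longer than 3 seconds enter the denominator.
long_views = [t for t in watch_times if t > 3]
avg_inflated = sum(long_views) / len(long_views)

print(avg_all)       # about 16.7 seconds
print(avg_inflated)  # about 31.7 seconds, nearly double
```

The three-second threshold here is an assumption taken from the reporting, not from Facebook's note itself; the point is only that trimming the denominator can inflate the metric substantially.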
Quite recently, the European Commission has approved, under the EU Merger Regulation, a proposed telecommunications joint venture between Hutchison and VimpelCom in Italy (you can read the press release here), namely between their Italian subsidiaries H3G (better known under the 3 brand) and Wind. While Wind is currently the third largest operator, active both in the fixed and in the mobile sector, H3G is the fourth mobile operator by size. The merger would likely create the company with the largest share of the mobile telecommunications market in Italy, and reduce the benefits that customers derive from broader competition among operators. However, the EU appears to have released no details of any quantitative analysis. Here we try to gain some deeper insight through the well-known Herfindahl-Hirschman Index. Continue reading
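The Herfindahl-Hirschman Index is simply the sum of the squared market shares of all operators. A quick sketch, using hypothetical market shares (the actual figures were not released by the EU), shows how the merger would raise concentration:

```python
# Hypothetical Italian mobile market shares, in percent; illustrative only.
shares_pre = {"TIM": 34, "Vodafone": 31, "Wind": 22, "H3G": 13}

def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared market shares (in %)."""
    return sum(s ** 2 for s in shares.values())

# Post-merger: Wind and H3G combine into a single operator.
shares_post = {"TIM": 34, "Vodafone": 31, "Wind-H3G": 22 + 13}

print(hhi(shares_pre))   # 2770
print(hhi(shares_post))  # 3342
```

With these made-up shares the index jumps by more than 500 points in an already concentrated market, the kind of increase that competition authorities typically flag for scrutiny.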
When prices vary in a programmed way on the basis of time, demand, or resource scarcity, the variation typically takes the form of a rebate or discount: the high price is taken as the reference, and the price is quoted as some percentage off that high price, so that the customer enjoys the pleasant feeling of paying less than otherwise. This is exactly the opposite of what happens with surge pricing, where the bottom price is the reference and prices are advertised as a multiple of it. Surge pricing is the method chosen by Uber, the transportation network company. Prof. Garrett van Ryzin, currently on leave from Columbia University to serve as Head of Dynamic Pricing Research at Uber Technologies, gave some insight into Uber's practices in his talk on Data and Surge Pricing at Uber, delivered at the Imperial College Data Science Institute last February. As he pointed out, in Uber's view such an unorthodox pricing algorithm, driven by the imbalance between strong demand and weak supply, is indeed a way to shape demand and correct the imbalance: customers will refrain from hailing a ride, while Uber drivers will flock towards surge pricing areas. Some interesting snapshots of surge pricing in action can be seen in the paper by Hall, Kendrick, and Nosko. The method seems to work, though it has spurred a lot of criticism among angry customers, as can be read in this WSJ article.
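The contrast between the two framings can be sketched in a few lines; the prices and multipliers below are made up for illustration and are not actual Uber fares:

```python
def discounted_price(reference_high, discount_pct):
    """Conventional dynamic pricing: quote a markdown from a high reference price."""
    return reference_high * (1 - discount_pct / 100)

def surge_price(base_fare, multiplier):
    """Surge pricing: quote a multiple of the bottom (base) price."""
    return base_fare * multiplier

# Same mechanics, opposite psychology:
# the first is advertised as "30% off", the second as "2.1x the normal fare".
print(discounted_price(100.0, 30))
print(surge_price(10.0, 2.1))
```

Economically both are just state-dependent prices; the difference lies entirely in which end of the price range is presented as "normal" to the customer.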
All of you experience the daily strain of having to remove tons of e-mail spam messages, talking of vacant successions, ousted rulers, remote relatives you didn't know of, etc. The one piece of advice is NOT to answer any of these messages, but you may be curious to know what would happen if you did (without, of course, following it through to the end). Well, someone has done it, so we can quench our curiosity without risking anything. Take a look at this hilarious TED video.
A colleague of mine, Luigi Laura, reports a recent example of statistical ignorance. In the flurry of newspaper articles about the VW scandal, one of them, published in an Italian newspaper (and a very important one, poor us...), reported statistics on the level of CO2 emissions as measured in laboratory tests performed by Transport & Environment, an organisation whose mission is "to promote, at EU and global level, a transport policy based on the principles of sustainable development." Continue reading
Though my report is a bit late, I take the chance to highlight and praise the initiative taken by two colleagues, Ulf Brefeld (TU Darmstadt) and Thorsten Strufe (TU Dresden), who organised the Workshop on Privacy and Inference (PRINF), held in Dresden in September. Collecting personal data and linking them to obtain a more complete profile of people is becoming a widespread activity for many respected companies, but also for fraudsters. Nowadays, both criminal activities and privacy protection countermeasures rely heavily on statistical inference. The recognition of the important role played by this branch of research is what motivated the organisation of PRINF. I presented a paper (co-authored with Giuseppe D'Acquisto) on the use of option contracts in a market where some suppliers wish to stay in the market and sell goods, while at the same time protecting their personal data (e.g., the level of their stock). I hope that privacy-related workshops and conferences will devote more and more space to the use of statistical inference tools in the future.
In my classes I often refer to people who are faster at writing code than at thinking as compulsive code writers. Therefore, I strongly agree with the opinions expressed in this article by Jeff Atwood, which appeared in the New York Daily News. Getting stronger at maths, critical reasoning, writing, and engineering estimates is much better than becoming a faster code writer...
In the past, a few studies addressed the problem of understanding the impact of a data breach on the breached company's value (see, e.g., the paper "The effect of internet security breach announcements on market value" by Cavusoglu et al.). The general conclusion was that the company's value (as measured by the market price of its shares) declined after the announcement of a security breach. The issue has been investigated further by Hinz et al. Continue reading
Some days ago, I was in Rennes for two days, serving on a PhD committee at Télécom Bretagne. As you may know, Télécom Bretagne is the name of the engineering school that was formerly known as the Ecole Nationale Supérieure des Télécommunications (ENST-Bretagne), which can boast a long and prestigious history in telecommunications education in France. We were there to examine the PhD work of Vladimir Fux. In addition to me Continue reading
Data breaches are always a concern, so data on their diffusion and the ensuing damages are always welcome. I've just read a recently published paper on the subject: "Towards a Model for Data Breaches: An Universal Problem for the Public". It reports a sample of the data on data breaches gathered by the Privacy Rights Clearinghouse, a nonprofit corporation whose mission (in their words) is to engage, educate and empower individuals to protect their privacy. While the analysis of data coming from this organization is rather new in the literature, on the whole I found the paper quite disappointing. First of all, though the data from the PRC stretch through 2013, the authors limit themselves to a sample pertaining to 2005-2010 (i.e., over four years old). In addition, though the title claims that they provide a model for data breaches, they actually limit themselves to classifying the data breaches by industry and reporting the resulting time series, without trying to explain the not-so-monotonic behaviour. I hope the authors will be able to extract much more information from those data, in a way similar to the Verizon report on their data. My sketchnote summing up the paper can be downloaded here.
Holtfreter, Robert E., and Adrian Harrington. "Towards a Model for Data Breaches: An Universal Problem for the Public." International Journal of Public Information Systems 10.1 (2014).