Yearly Archives: 2014

Economics of platforms: Implications for cyberinfrastructure

I recommend an interesting paper by Glen Weyl and Alexander White, “Let the Best ‘One’ Win: Policy Lessons from the New Economics of Platforms.”  The abstract summarizes the message:

The primary policy problem in platform markets is usually considered to be excessive lock-in to a potentially inefficient dominant platform. We argue that, once one accounts for sophisticated platform pricing strategies, such concerns are overblown. Instead the greater market failure is excessive fragmentation and insufficient participation. These problems, in turn, call for a very different policy response: aiding winners in taking all,

Read more ›




Parallel algorithms for the spectral transform method


I sometimes wonder about the relationship between my affection for a paper that I have written, the effort required to write it, and the recognition the paper obtained. I think that the papers that took the most time are still the ones I like best, even if some of them did not generate much buzz in the community.

One paper that took a lot of time and of which I remain proud is Parallel Algorithms for the Spectral Transform Method, which Pat Worley and I published in the SIAM Journal on Scientific Computing in 1997.

Read more ›




Why not store personal health information in the cloud?

At the recent American Association for the Advancement of Science (AAAS) meeting in Chicago, my colleague Bob Grossman organized what was by all accounts a fascinating session on How Big Data Supports Biomedical Discovery. It being a Saturday, I had family duties. But I read with interest a synopsis of remarks made by speaker Lincoln Stein: “Legal and ethical issues with using commercial cloud vendors for cancer data. If Comcast buys Amazon, who owns data?” (As you can tell by the abbreviated style, this remark was communicated by Twitter.)

In 2010, Lincoln published The Case for Moving Genome Informatics to the Cloud,

Read more ›




Thoughts on dark software

I wrote a two-page white paper for a DOE workshop on software productivity for extreme-scale science. In this paper, I coin a new term (at least I think it is new!): dark software. I explain this concept below:

Scientific discovery is the result not of individual simulations but of complex end-to-end research processes. These processes frequently involve, for example, the ingest and analysis of simulation, experimental, and observational data; the invocation of simulations within larger design optimization and uncertainty quantification activities; validation through comparison of experimental and simulation data;

Read more ›




Micrometrics as a solution to software invisibility

Software is central to modern science. But software is also largely invisible, and in consequence it is undervalued, poorly understood, and subject to apparent underinvestment and to policy decisions that are not driven by data. We must do better if we want science to address the challenges faced by humankind in a time of massive scientific opportunity but limited resources. I argue here that micrometrics can help us do better.

Read more ›




The History of the Grid: Comments invited

Two years ago, Carl Kesselman and I published a rather lengthy paper that purports to recount the “history of the grid.” (I. Foster, C. Kesselman, The History of the Grid (PDF), in Cloud Computing and Big Data, IOS Press, Amsterdam, 2013; 37 pages, 176 references.)

We believe that this paper includes useful material. We also know that it can be much improved, and to that end we plan a second edition.

Read more ›