Alex Smola, Adventures in Data Land (http://blog.smola.org/)
Distributing Data in a Parameter Server<p>One of the key features of a parameter server is that it, well, serves parameters. In particular, it serves more parameters than a single machine can typically hold and provides more bandwidth than a single machine offers. </p>
<p><img alt="image" src="https://31.media.tumblr.com/1e67c6f5a665d5b9a8b2b13f5a7fc9fc/tumblr_inline_n5utdsd7mH1qasu5b.png"/></p>
<p>A sensible strategy to increase both aspects is to arrange data in the form of a bipartite graph with clients on one side and the server machines on the other. This way bandwidth and storage increase linearly with the number of machines involved. This is well understood. For instance, distributed (key,value) stores such as <a href="http://memcached.org">memcached</a> or <a href="http://basho.com/riak/">Basho Riak</a> use it. It dates back to the ideas put forward e.g. in the STOC 1997 paper by <a href="http://people.csail.mit.edu/karger/">David Karger</a> et al. on <a href="http://dl.acm.org/citation.cfm?id=258660">Consistent Hashing and Random Trees</a>. </p>
<p>A key problem is that we obviously cannot store a mapping table from keys to machines. Such a table would be as large as the set of keys itself and would need to be maintained and updated on every client. One way around this is the argmin hash mapping: given a machine pool \(M\), we assign a given (key,value) pair to the machine with the smallest hash, i.e.</p>
<p>$$m(k, M) = \mathrm{argmin}_{m \in M} h(m,k)$$</p>
<p>The advantage of this scheme is that it allows for really good load balancing and repair. First off, the load is almost uniformly distributed, short of a small number of heavy hitters. Secondly, if a machine is removed or added to the machine pool, rebalancing affects all other machines uniformly. To see this, notice that the choice of machine with the smallest and second-smallest hash value is uniform. </p>
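For concreteness, here is a minimal Python sketch of the argmin hash (also known as rendezvous or highest-random-weight hashing). The MD5-based hash and the machine names are illustrative choices, not what the papers used:

```python
import hashlib

def h(machine: str, key: str) -> int:
    # Hash the (machine, key) pair to a 64-bit integer.
    digest = hashlib.md5(f"{machine}:{key}".encode()).digest()
    return int.from_bytes(digest[:8], "big")

def assign(key: str, machines: list[str]) -> str:
    # m(k, M) = argmin_{m in M} h(m, k)
    return min(machines, key=lambda m: h(m, key))

machines = ["server0", "server1", "server2", "server3"]
owner = assign("weight_42", machines)
```

Removing any machine other than the owner leaves a key's assignment unchanged, and removing the owner hands the key to the machine with the second-smallest hash, which is why rebalancing spreads uniformly across the pool.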
<p>Unfortunately, this is a stupid way of distributing (key,value) pairs for machine learning. And it is what we did in our <a href="http://www.vldbarc.org/pvldb/vldb2010/papers/R63.pdf">2010 VLDB</a> and <a href="http://dx.doi.org/10.1145/2124295.2124312">2012 WSDM</a> papers. In our defense, we didn’t know any better. And others copied that approach … after all, how can you improve on such nice rebalancing properties?</p>
<p>This begs the question of why it is a bad idea. It all comes down to synchronization. Whenever a client attempts to synchronize its keys, it needs to traverse the list of keys it owns and communicate with the appropriate servers. In the scheme above, that means contacting a different random server for each key. This is amazingly costly. Probably the best comparison would be a P2P network where each byte is owned by a different machine: downloads would take forever.</p>
<p>We ‘fixed’ this problem by cleverly reordering the access and then performing a few other steps of randomization. There’s even a nice load balancing lemma in the <a href="http://dx.doi.org/10.1145/2124295.2124312">2012 WSDM</a> paper. However, a much better solution is to prevent the problem from happening and to borrow from key distribution algorithms such as <a href="http://en.wikipedia.org/wiki/Chord_(peer-to-peer)">Chord</a>. In it, servers are inserted into a ring via a hash function. So are keys. This means that each server now owns a <strong>contiguous segment of keys</strong>. As a result, we can easily determine which keys go to which server, simply by knowing where in the ring the server sits.</p>
<p><img alt="image" src="https://31.media.tumblr.com/2741f4e10d3e42bd0312de6496042368/tumblr_inline_n5uuxoVsIy1qasu5b.png"/></p>
<p>In the picture above, keys are represented by little red stars. They are randomly assigned using a hash function via \(h(k)\) to the segments ‘owned’ by servers \(s\) that are inserted in the same way, i.e. via \(h(s)\). In the picture above, each server ‘owns’ the segment to its left. Also have a look at the <a href="http://www.allthingsdistributed.com/files/amazon-dynamo-sosp2007.pdf">Amazon Dynamo paper</a> for a related description.</p>
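A minimal sketch of such a ring in Python; the MD5-based hash and the server names are illustrative (real systems such as Chord use SHA-1 and finger tables for lookup):

```python
import bisect
import hashlib

def ring_hash(name: str) -> int:
    # Map servers and keys onto the same ring of 2^64 positions.
    return int.from_bytes(hashlib.md5(name.encode()).digest()[:8], "big")

class HashRing:
    def __init__(self, servers):
        # Each server sits at ring_hash(s) on the ring and owns the
        # contiguous segment of key positions up to its own position.
        self.points = sorted((ring_hash(s), s) for s in servers)

    def owner(self, key: str) -> str:
        pos = bisect.bisect_left(self.points, (ring_hash(key), ""))
        # Wrap around: keys past the last server belong to the first one.
        return self.points[pos % len(self.points)][1]
```

Because each server owns a contiguous segment, a client can sort its keys once and send each server a single range request, instead of talking to a different random server per key.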
<p>Obviously, such load balancing isn’t quite as ideal as the argmin hash. For instance, if a machine fails, the next machine inherits the entire segment. However, by inserting each server \(\log n\) times we can ensure that a good load balance is achieved and also that, when machines are removed, there are several other machines that pick up the work. Moreover, it is now also very easy to replicate things (more on this later). If you’re curious about how to do this, have a look at <a href="http://research.microsoft.com/en-us/um/people/amar/">Amar Phanishayee</a>'s excellent <a href="http://reports-archive.adm.cs.cmu.edu/anon/2012/CMU-CS-12-139.pdf">thesis</a>. In a nutshell, the machines to the left hold the replicas. More details in the next post.</p>http://blog.smola.org/post/86282060381http://blog.smola.org/post/86282060381Mon, 19 May 2014 21:31:00 -0700100 Terabytes, 5 Billion Documents, 10 Billion Parameters, 1 Billion Inserts/s<p>We’ve been busy building the next generation of a Parameter Server and it’s finally ready. It’s quite different from our previous designs, the main improvements being fault tolerance and self repair, a much improved network protocol, flexible consistency models, and a much more general interface.</p>
<p>In the next few posts I’ll explain the engineering decisions that went into this system which is capable of solving problems as diverse as very high throughput sketching, topic models, and optimization. And yes, it will be open source so you can build your own algorithms on top of it.</p>http://blog.smola.org/post/85462143726http://blog.smola.org/post/85462143726Sun, 11 May 2014 14:56:25 -0700Machine Learning Summer School 2014<p><img src="https://31.media.tumblr.com/7cd756366892b5441204aef195fdc8e2/tumblr_inline_n45d7vA7lb1qasu5b.png"/></p>
<p><a href="http://www.cs.cmu.edu/~zkolter/">Zico Kolter</a> and I proudly announce the <a href="http://www.mlss2014.com">2014 Machine Learning Summer School</a> in Pittsburgh. It will be held at <a href="http://www.cmu.edu">Carnegie Mellon University</a> on July 7-18, 2014. Our focus is on scalable data analysis and its applications, largely in the internet domain. So, if this is your PhD topic or if you’re planning on a startup in this area, come along. </p>
<p><a href="http://mlss2014.com/registration.html">Registration</a> is open now. We will have scholarships for talented students to waive housing and attendance fees. Please submit your CVs on the site. See you in Pittsburgh.</p>
<p><a href="http://www.mlss2014.com">www.mlss2014.com</a></p>http://blog.smola.org/post/82938400964http://blog.smola.org/post/82938400964Wed, 16 Apr 2014 16:31:07 -0700Beware the bandwidth gap - speeding up optimization<p>Disks are slow and RAM is fast. Everyone knows that. But many optimization algorithms don’t take advantage of this. More to the point, disks currently stream at about 100-200 MB/s, solid state drives stream at over 500 MB/s with 1000x lower latency than disks, and main memory reigns supreme at about <a href="http://www.techspot.com/review/679-intel-haswell-core-i7-4770k/page7.html">10-100 GB/s bandwidth</a> (depending on how many memory banks you have). This means that it is 100 times more expensive to retrieve instances from disk rather than recycling them once they’re already in memory. CPU caches are faster yet with 100-1000 GB/s of bandwidth. Everyone knows this. If not, read <a href="http://static.googleusercontent.com/media/research.google.com/en/us/people/jeff/stanford-295-talk.pdf" title="Jeff Dean's slides" target="_blank">Jeff Dean’s slides</a>. Page 13 is pure gold.</p>
<p>Ok, so what does this mean for machine learning? If you can keep things in memory, you can do things way faster. This is the main idea behind <a href="http://spark.apache.org/">Spark</a>. It’s a wonderful alternative to Hadoop. In other words, if your data fits into memory, you’re safe and you can process data way faster. A lot of datasets that are considered big in <em>academia</em> fit this bill. But what about <em>real</em> big data? Essentially you have two options - have the systems designer do the hard work or change your algorithm. This post is about the latter. And yes, there’s a good case to be made about who should do the work: the machine learners or the folks designing the computational infrastructure (I think it’s both).</p>
<p>So here’s the problem: many online algorithms load data from disk, stream it through memory as efficiently as possible and <em>discard</em> it after seeing it once, only to pick it up later for another pass through the data. That is, these algorithms are <strong>disk bound</strong> rather than CPU bound. Several solvers try to address this by making the disk representation more efficient, e.g. <a href="http://www.csie.ntu.edu.tw/~cjlin/liblinear/">Liblinear</a> or <a href="http://hunch.net/~vw/">VowpalWabbit</a>, both of which use their own internal representation for efficiency. While this still makes for quite efficient code that can process up to 3TB of data per hour in any given pass, main memory is still much faster. This has led to the misconception that many machine learning algorithms are disk bound. But they aren’t …</p>
<p>What if we could re-use data that’s already in memory? For instance, use a ringbuffer that the disk writes into (slowly) and the CPU reads from (up to 100 times more rapidly). The problem is what to do with an observation that we’ve already processed. A naive strategy would be to pretend that it is a new instance, i.e. simply update on it more than once. But this is very messy, since we need to keep track of how many times we’ve seen the instance before, and it creates nonstationarity in the training set. </p>
<p>A much cleaner strategy is to switch to dual variables, similar to the updates in the Dualon of <a href="http://ttic.uchicago.edu/~shai/papers/ShalevSi06_dualon.pdf">Shalev-Shwartz and Singer</a>. This is what Shin Matsushima did in our <a href="http://www.r.dl.itc.u-tokyo.ac.jp/~masin/Appendix.pdf">dual cached loops</a> paper. Have a look at <a href="http://www.r.dl.itc.u-tokyo.ac.jp/~masin/streamsvm.html">StreamSVM</a> here. Essentially, it keeps data in memory in a ringbuffer and updates the dual variables. This way, we’re guaranteed to make progress at each step, even if we’re revisiting the same observation more than once. To see what happens have a look at the graph below:</p>
<p><img src="https://31.media.tumblr.com/cd836818f63461ae9ca5c6a45db40de4/tumblr_inline_n45cuvT4Me1qasu5b.png"/></p>
<p>It’s just as fast as LibLinear provided that it’s all in memory. Algorithmically, what happens in the SVM case is that one updates the Lagrange multipliers \(\alpha_i\), while simultaneously keeping an estimate of the parameter vector \(w\) available.</p>
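The dual update itself is simple. Below is a hedged sketch of LibLinear-style dual coordinate ascent for the hinge-loss SVM; StreamSVM wraps updates of this kind around a ringbuffer, but the buffer logic is omitted here and the function names are our own:

```python
import numpy as np

def dual_cd_epoch(X, y, alpha, w, C=1.0):
    # One sweep of dual coordinate ascent over whatever is in memory.
    # Each step increases the dual objective, so revisiting an instance
    # that is still in the buffer makes progress rather than causing harm.
    for i in range(len(y)):
        grad = y[i] * X[i].dot(w) - 1.0   # partial gradient for alpha_i
        curv = X[i].dot(X[i])             # Q_ii = ||x_i||^2
        new_a = min(max(alpha[i] - grad / curv, 0.0), C)
        w += (new_a - alpha[i]) * y[i] * X[i]  # keep w = sum_j alpha_j y_j x_j
        alpha[i] = new_a
    return alpha, w
```

Because \(w\) is maintained incrementally, each update costs time proportional to the number of nonzeros in \(x_i\), no matter how often the instance has been seen before.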
<p>That said, this strategy is more general: reuse data several times for optimization while it is in memory. If possible, perform successive updates by changing variables of an optimization that is well-defined regardless of the order in which (and how frequently) data is seen.</p>http://blog.smola.org/post/82937674537http://blog.smola.org/post/82937674537Wed, 16 Apr 2014 16:22:58 -0700The Weisfeiler-Lehman algorithm and estimation on graphs<p>Imagine you have two graphs \(G\) and \(G’\) and you’d like to check how similar they are. If all vertices have unique attributes this is quite easy:</p>
<p>FOR ALL vertices \(v \in G \cup G’\) DO</p>
<ul><li>Check that \(v \in G\) and that \(v \in G’\)</li>
<li>Check that the neighbors of \(v\) are the same in \(G\) and \(G’\)</li>
</ul><p>This algorithm can be carried out in linear time in the size of the graph, alas many graphs do not have vertex attributes, let alone unique vertex attributes. In fact, graph isomorphism, i.e. the task of checking whether two graphs are identical, is a hard problem (it is still an open research question how hard it really is). In this case the above algorithm cannot be used since we have no idea which vertices we should match up.</p>
<p>The Weisfeiler-Lehman algorithm is a mechanism for assigning fairly unique attributes efficiently. Note that it isn’t guaranteed to work, as discussed in <a href="http://arxiv.org/abs/1101.5211" title="B. L. Douglas">this paper</a> by Douglas - this would solve the graph isomorphism problem after all. The idea is to assign fingerprints to vertices and their neighborhoods repeatedly. We assume that vertices have an attribute to begin with. If they don’t then simply assign all of them the attribute 1. Each iteration proceeds as follows:</p>
<p>FOR ALL vertices \(v \in G\) DO</p>
<ul><li>Compute a hash of \((a_v, a_{v_1}, \ldots, a_{v_n})\) where \(a_{v_i}\) are the attributes of the neighbors of vertex \(v\).</li>
<li>Use the hash as the vertex attribute for \(v\) in the next iteration.</li>
</ul><p>The algorithm terminates when this iteration has converged in terms of unique assignments of hashes to vertices. </p>
<p>Note that it is <em>not</em> guaranteed to work for all graphs. In particular, it fails for graphs with a high degree of symmetry, e.g. chains, complete graphs, tori and stars. However, whenever it converges to a unique vertex attribute assignment it provides a certificate for graph isomorphism. Moreover, the sets of vertex attributes can be used to show that two graphs are not isomorphic (it suffices to verify that the sets differ at any stage).</p>
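A compact sketch of the iteration in Python; vertex attributes start at 1 and Python's built-in hash stands in for whatever fingerprint function one prefers (note that hashes of integer tuples are only stable within a single process):

```python
def wl_labels(adj, attrs=None, max_iters=10):
    # adj maps each vertex to the list of its neighbors;
    # attrs maps each vertex to its initial attribute (default: all 1).
    attrs = dict(attrs) if attrs else {v: 1 for v in adj}
    for _ in range(max_iters):
        # Fingerprint of a vertex: its own attribute plus the sorted
        # multiset of its neighbors' attributes.
        new = {v: hash((attrs[v], tuple(sorted(attrs[u] for u in adj[v]))))
               for v in adj}
        # Converged once refinement no longer splits any attribute class.
        if len(set(new.values())) == len(set(attrs.values())):
            return new
        attrs = new
    return attrs
```

On the symmetric cases mentioned above this converges immediately without distinguishing anything: on a cycle, for instance, every vertex keeps the same attribute forever.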
<p><a href="http://www.mpi-inf.mpg.de/~mehlhorn/ftp/genWLpaper.pdf" title="WL Paper">Shervashidze et al. 2012</a> use this idea to define a similarity measure between graphs. Basically the idea is that graphs are most similar if many of their vertex identifiers match since this implies that the associated subgraphs match. Formally they compute a kernel using</p>
<p>$$k(G,G’) = \sum_{i=1}^d \lambda_i \sum_{v \in V} \sum_{v’ \in V’} \delta(a(v,i), a(v’,i))$$</p>
<p>Here \(a(v,i)\) denotes the vertex attribute of \(v\) after WL iteration \(i\). Moreover, \(\lambda_i\) are nonnegative coefficients that weigh how much the similarity at level \(i\) matters. Rather than a brute-force computation of the above test for equality, we can sort the vertex attribute sets. Note that vertices that have different attributes at any given iteration will never have the same attribute thereafter. This means that we can compare the two sets at all depths at a total cost of \(O(d \cdot (|V| + |V’|))\). </p>
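Given the per-iteration attribute sets, the kernel reduces to counting matching attributes, since \(\sum_{v,v’} \delta(a(v,i), a(v’,i))\) equals, for each attribute value, the product of its counts in the two graphs. A sketch (the plain-dict label containers are purely for illustration):

```python
from collections import Counter

def wl_kernel(labels_g, labels_gp, lambdas):
    # labels_g[i] maps each vertex of G to its attribute after WL
    # iteration i; likewise labels_gp for G'. lambdas holds the
    # nonnegative per-depth weights lambda_i.
    k = 0.0
    for lam, lg, lgp in zip(lambdas, labels_g, labels_gp):
        cg, cgp = Counter(lg.values()), Counter(lgp.values())
        # sum_{v,v'} delta(a(v,i), a(v',i)) = sum over shared labels
        # of count_in_G * count_in_G'
        k += lam * sum(cg[label] * cgp[label] for label in cg)
    return k
```

Counting attributes this way, rather than comparing all vertex pairs, is exactly what makes the linear-time cost above achievable.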
<p>A similar trick is possible if we want to regress between vertices on the same graph, since we can use the set of attributes that a vertex obtains during the iterations as features. Finally, we can make our life even easier if we don’t compute kernels at all and instead use a linear classifier on the vertex attributes directly. </p>http://blog.smola.org/post/33412570425http://blog.smola.org/post/33412570425Thu, 11 Oct 2012 21:14:21 -0700In defense of keeping data private<p class="commenter"><span class="comment-body">This is going to be contentious. And it somewhat goes against a lot of things that researchers hold holy. And it goes against my plan of keeping philosophy out of this blog. But it must be said, since remaining silent has the potential of damaging science with proposals that sound good but are bad.</span></p>
<p class="commenter"><span><br/></span></p>
<p class="commenter">The proposal is that certain conferences make it mandatory to publish datasets that were used for the experiments. This is a very bad idea and two things are getting confused here: <span>scientific progress and common access. These two are not identical. Reproducibility is often confused with common access. To make these things a bit more clear, here’s an example where it’s more obvious: </span></p>
<p class="commenter"><span><br/></span></p>
<p class="commenter"><span>CERN is a monster machine. There’s only one of its kind in the world. There are limited resources and it’s impossible for any arbitrary researcher to reproduce their experiments, simply because of the average physicist being short of the tens of billions of Dollars that it took to build it. Access to the accelerator is also limited. It requires qualification and resource planning. So, even if we think this is open, it isn’t really as open as it looks. And yes, working at CERN gives you an unfair advantage over all the researchers who don’t. </span></p>
<p class="commenter"><span><br/></span></p>
<p class="commenter"><span>Likewise take medical research. Patient records are covered by HIPAA privacy constraints and there is absolutely no way for such records to be publicly released. The participants sign an entire chain of documents that tie them to not releasing such data publicly. In other words, common access is impossible. Reproducibility would require that someone, who wants to test a contentious result, needs to sign corresponding privacy documents before accessing the data. And yes, working with the ‘right’ hospitals gives you an unfair advantage over researchers who didn’t work building this relationship. </span></p>
<p class="commenter"><span><br/></span></p>
<p class="commenter"><span>Lastly, user data on the internet. Users have every right for their comments, content, images, mails, etc. to be treated with the utmost respect and to be published only when it is in their interest and with their permission to do so. I believe that there is a material difference between data being made available for analytics purposes in a personalization system and data being made available ‘in the raw’ for any researcher to play with. The latter allows for individuals to inspect particular records and learn that Alice mailed Bob a love letter. Something that would make Charlie very upset if he found out. Hence common access is a non-starter. </span></p>
<p class="commenter"><span><br/></span></p>
<p class="commenter"><span>There are very clear financial penalties for releasing private data - users would leave the service. Moreover, it would give a competitor an advantage over the releasing party. Since the data is largely collected by private parties at their expense it is not possible. </span></p>
<p class="commenter"><span class="comment-body">As for reproducibility - this is an issue. But provided that, in case of a contentious result, it is possible for a trusted researcher to check it, possibly after signing an NDA, this can be addressed. And yes, working for one of these companies gives you an unfair advantage. </span></p>
<p class="commenter"><span><br/></span></p>
<p class="commenter"><span class="comment-body">In summary, while desirable, I strongly disagree with the mandatory publication policy. Yes, every effort should be made personally by researchers to see whether some data is releasable. But to mandate it would essentially do two things: it would make industrial research even more secretive than it already is (and that’s a terrible thing), and it would make academic research less relevant for real problems (I’ve seen my fair share, and am guilty of my fair share, of such papers).</span></p>
As for reproducibility - this is an issue. But provided that in case of a contentious result it is possible for a trusted researcher to check them, possibly after signing an NDA, this can be addressed. And yes, working for one of these companies gives you an unfair advantage.
In summary, while desirable, I strongly disagree with the mandatory publications policy. Yes, every effort should be made personally by researchers to see whether some data is releasable. But to mandate it would essentially do two things - it will make industrial research even more secretive than it already is (and that's a terrible thing). And secondly, it will make academic research less relevant for real problems (I've seen my fair share and am guilty of my fair share of such papers).">In summary, while desirable, I strongly disagree with a mandatory publications policy. Yes, every effort should be made personally by researchers to see whether some data is releasable. And for publicly funded research this may well be the right thing to do. But to mandate it for industry would essentially do two things - it will make industrial research even more secretive than it already is (and that’s a terrible thing). And secondly, it will make academic research less relevant for real problems (I’ve seen my fair share and am guilty of my fair share of such papers).</span></p>http://blog.smola.org/post/22786487711http://blog.smola.org/post/22786487711Thu, 10 May 2012 10:39:33 -0700Machine Learning Summer School Purdue Videos<a href="http://www.youtube.com/playlist?list=PL2A65507F7D725EFB">Machine Learning Summer School Purdue Videos</a>: <p>The MLSS 2011 videos from Purdue are now available on YouTube. Enjoy!</p>http://blog.smola.org/post/14345888700http://blog.smola.org/post/14345888700Fri, 16 Dec 2011 23:14:58 -0800Random numbers in constant storage<p>Many algorithms require random number generators to work. For instance, locality sensitive hashing requires one to compute the random projection matrix P in order to compute the hashes z = P x. Likewise, fast eigenvalue solvers in large matrices often rely on a random matrix, e.g. 
the paper by <a href="http://amath.colorado.edu/faculty/martinss/Pubs/2010_HMT_random_review.pdf" title="SIAM Review">Halko, Martinsson and Tropp, SIAM Review 2011</a>, which assumes that at some point we multiply a matrix M by a matrix P with Gaussian random entries. </p>
<p>The problem with these methods is that if we want to perform this projection operation in many places, we need to distribute the matrix P to several machines. This is undesirable since a) it introduces another stage of synchronization between machines and b) it requires space to store the matrix P in the first place. The latter is often bad since memory access can be much slower than computation, depending on how the memory is being accessed. The prime example here is multiplication with a sparse matrix which would require random memory access. </p>
<p>Instead, we simply recompute the entries by hashing. To motivate things, consider the case where the entries of P are all drawn from the uniform distribution U[0,1]. For a hash function h with range [0 .. N] simply set \(P_{ij} = h(i,j)/N\). Since hash functions map (i,j) pairs to uniformly distributed uncorrelated numbers in the range [0 .. N] this essentially amounts to uniformly distributed random numbers that can be recomputed on the fly. </p>
<p>A slightly more involved example is how to draw Gaussian random variables. We may e.g. resort to the <a href="http://en.wikipedia.org/wiki/Box%E2%80%93Muller_transform" title="Box Muller transform">Box-Muller transform</a> which shows how to convert two uniformly distributed random numbers into two Gaussians. While this is somewhat wasteful (we consume two uniform numbers yet keep only one of the two Gaussians), we simply use two uniform hashes and then compute </p>
<p>$$P_{ij} = \left({-2 \log h(i,j,1)/N}\right)^{\frac{1}{2}} \cos (2 \pi h(i,j,2)/N)$$</p>
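<p>As a concrete illustration, here is a minimal Python sketch of both the uniform and the Gaussian case. The choice of blake2b as the hash function and the exact mapping of digests into [0 .. N] are illustrative assumptions, not prescriptions; any well-mixed hash with a large enough range will do.</p>

```python
import hashlib
import math

N = 2**32 - 1  # hash range [0 .. N]

def h(*key):
    """Deterministic hash of a tuple of integers into [0 .. N]."""
    data = ",".join(map(str, key)).encode()
    return int.from_bytes(hashlib.blake2b(data, digest_size=4).digest(), "big")

def uniform_entry(i, j):
    """P_ij approximately U[0,1], recomputed on the fly from (i, j)."""
    return h(i, j) / N

def gaussian_entry(i, j):
    """P_ij approximately N(0,1) via Box-Muller on two uniform hashes."""
    u1 = (h(i, j, 1) + 1) / (N + 2)   # shifted into (0, 1) so log is finite
    u2 = h(i, j, 2) / N
    return math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)
```

<p>Because each entry is a pure function of \((i,j)\), any machine can regenerate any block of the projection matrix on demand; nothing needs to be stored or synchronized.</p>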
<p>Since this is known to generate Gaussian random variables from uniform random variables this will give us Gaussian distributed hashes. Similar tricks work for other random variables. It means that things like Random Kitchen Sinks, Locality Sensitive Hashing, and related projection methods never really need to store the ‘random’ projection coefficients whenever memory is at a premium or whenever it would be too costly to synchronize the random numbers.</p>http://blog.smola.org/post/14345795830http://blog.smola.org/post/14345795830Fri, 16 Dec 2011 23:11:10 -0800Slides for the NIPS 2011 tutorial<p>The slides for the 2011 NIPS tutorial on Graphical Models for the Internet are online. Lots of stuff on parallelization, applications to user modeling, content recommendation, and content analysis here. </p>
<p><a href="http://cevug.ugr.es/tv2/" title="Livestream">Livestream</a> (16:00-18:00 European Standard Time)</p>
<p>Part 1 [<a href="http://alex.smola.org/talks/nips2011/part1.key" title="Part 1">keynote</a>] [<a href="http://alex.smola.org/talks/nips2011/part1.pdf" title="Part 1">pdf</a>], Part 2 [<a href="http://alex.smola.org/talks/nips2011/part2.pptx" title="Part 2">powerpoint</a>] [<a href="http://alex.smola.org/talks/nips2011/part2.pdf" title="Part 2">pdf</a>]</p>http://blog.smola.org/post/14117021513http://blog.smola.org/post/14117021513Mon, 12 Dec 2011 06:27:56 -0800The Neal Kernel and Random Kitchen Sinks<p>So you read a <a title="Learning with Kernels" href="http://www.amazon.com/Learning-Kernels-Regularization-Optimization-Computation/dp/0262194759">book</a> on <a title="Grace Wahba" href="http://www.ec-securehost.com/SIAM/CB59.html">Reproducing Kernel Hilbert Spaces</a> and you’d like to try out this kernel thing. But you’ve got a lot of data and most algorithms will give you an expansion that requires a number of kernel functions linear in the amount of data. Not good if you’ve got millions to billions of instances.</p>
<p>You could try out low rank expansions such as the Nyström method of <a title="Nystrom" href="http://lapmal.epfl.ch/papers/nystroem.pdf">Seeger and Williams</a>, 2000, the randomized Sparse Greedy Matrix Approximation of <a title="SGMA" href="http://arnetminer.org/dev.do?m=downloadpdf&url=http://arnetminer.org/pdf/PDFFiles2/--d---d-1258203727680/Sparse%20Greedy%20Matrix%20Approximation%20for%20Machine%20Learning1258205169211.pdf">Smola and Schölkopf</a>, 2000 (the Nyström method is a special case where we only randomize by a single term), or the very efficient positive diagonal pivoting trick of <a title="Pivoting" href="http://www.ai.mit.edu/projects/jmlr/papers/volume2/fine01a/fine01a.pdf">Scheinberg and Fine</a>, 2001. Alas, all those methods suffer from a serious problem: at training you need to multiply by the inverse of the reduced covariance matrix, which incurs \(O(d^2)\) cost for a \(d\)-dimensional expansion. An example of an online algorithm that suffers from the same problem is this (NIPS award winning) paper of <a title="Csato Opper" href="http://www.ki.tu-berlin.de/fileadmin/fg135/Publikationen/Opper/papers02/CsOp02.pdf">Csato and Opper</a>, 2002. Assuming that we’d like to have d grow with the sample size this is not a very useful strategy. Instead, we want to find a method which has \(O(d)\) cost for d attributes yet shares good regularization properties that can be properly analyzed.</p>
<p>Enter Radford Neal’s seminal paper from 1994 on <a title="GP" href="http://www.cs.toronto.edu/~radford/ftp/pin.pdf">Gaussian Processes</a> (a famous NIPS reject). In it he shows that a Neural Network with an infinite number of nodes and a Gaussian Prior over coefficients converges to a GP. More specifically, we get the kernel</p>
<p>$$k(x,x’) = E_{c}[\phi_c(x) \phi_c(x’)]$$</p>
<p>Here \(\phi_c(x)\) is a function parametrized by c, e.g. the location of a basis function, the degree of a polynomial, or the direction of a Fourier basis function. A paper by <a title="Regularization" href="http://alex.smola.org/papers/1998/SmoSchMul98.pdf">Smola, Schölkopf and Müller</a>, 1998 discusses the same phenomenon for regularization networks in an RKHS setting. These ideas were promptly forgotten by their authors. One exception is the <a title="ekm" href="http://noble.gs.washington.edu/papers/schoelkopf_kernel.html">empirical kernel map</a> where one uses a <a title="svm linear" href="ftp://ftp.cs.wisc.edu/math-prog/talks/afosr.ps">generic design matrix</a> that is generated through the observations directly. </p>
<p>It was not until the paper by <a title="rks" href="http://books.nips.cc/papers/files/nips21/NIPS2008_0885.pdf">Rahimi and Recht</a>, 2008 on random kitchen sinks that this idea regained popularity. In a nutshell the algorithm works as follows: Draw d values \(c_i\) from the distribution over c. Use the corresponding basis functions in a linear model with quadratic penalty on the expansion coefficients. This method works whenever the basis functions are well bounded. For instance, for the Fourier basis the functions are bounded by 1. The proof of convergence of the explicit function expansion to the kernel is then a simple consequence of Chernoff bounds.</p>
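<p>Here is a minimal sketch of this recipe for the Gaussian RBF kernel via random Fourier features; the bandwidth, the feature count, and the use of NumPy are illustrative choices rather than part of the method's statement.</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def random_fourier_features(X, d=2000, sigma=1.0):
    """Map rows of X (n x p) into d cosine features whose inner products
    approximate the RBF kernel k(x, x') = exp(-||x - x'||^2 / (2 sigma^2))."""
    n, p = X.shape
    W = rng.normal(scale=1.0 / sigma, size=(p, d))  # draw d parameters c_i
    b = rng.uniform(0.0, 2.0 * np.pi, size=d)       # random phases
    return np.sqrt(2.0 / d) * np.cos(X @ W + b)

# The explicit feature map concentrates around the kernel value,
# as the Chernoff-bound argument above suggests.
X = rng.normal(size=(2, 5))
Phi = random_fourier_features(X, d=20000)
approx = float(Phi[0] @ Phi[1])
exact = float(np.exp(-np.sum((X[0] - X[1]) ** 2) / 2.0))
```

<p>Training then amounts to a linear model on \(\Phi\) with a quadratic penalty on the expansion coefficients, at \(O(d)\) cost per instance.</p>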
<p>In the random kitchen sinks paper Rahimi and Recht discuss RBF kernels and binary indicator functions. However, this works more generally for any well-behaved set of basis functions used to generate a random design matrix. A few examples:</p>
<ul><li>Fourier basis with Gaussian parameters. Take functions of the form \(e^{i w^\top x}\) where the coefficients \(w\) are drawn from a Gaussian. This is the random kitchen sinks paper. Obviously you can use hash functions rather than an actual random number generator. This ensures that you don’t need to store all parameters \(w\).</li>
<li>Pick random separating hyperplanes. This will effectively give you functions of bounded variation.</li>
<li>Use the empirical kernel map, i.e. we use some function \(k(x,x’)\) for which we employ for \(x’\) a random subset of the data we wish to train on.</li>
</ul>http://blog.smola.org/post/10572672684http://blog.smola.org/post/10572672684Fri, 23 Sep 2011 16:01:51 -0700Big Learning: Algorithms, Systems, and Tools for Learning at Scale<p class="p1">We’re organizing a <a title="Big Learning" href="http://www.biglearn.org">workshop at NIPS 2011</a>. Submission are solicited for a two day workshop December 16-17 in Sierra Nevada, Spain. </p>
<p class="p3">This workshop will address tools, algorithms, systems, hardware, and real-world problem domains related to large-scale machine learning (“Big Learning”). The Big Learning setting has attracted intense interest with active research spanning diverse fields including machine learning, databases, parallel and distributed systems, parallel architectures, and programming languages and abstractions. This workshop will bring together experts across these diverse communities to discuss recent progress, share tools and software, identify pressing new challenges, and to exchange new ideas. Topics of interest include (but are not limited to):</p>
<p class="p3"><strong>Hardware Accelerated Learning</strong>: Practicality and performance of specialized high-performance hardware (e.g. GPUs, FPGAs, ASIC) for machine learning applications.</p>
<p class="p3"><strong>Applications of Big Learning</strong>: Practical application case studies; insights on end-users, typical data workflow patterns, common data characteristics (stream or batch); trade-offs between labeling strategies (e.g., curated or crowd-sourced); challenges of real-world system building.</p>
<p class="p4"><strong>Tools, Software, & Systems</strong>: Languages and libraries for large-scale parallel or distributed learning. Preference will be given to approaches and systems that leverage cloud computing (e.g. Hadoop, DryadLINQ, EC2, Azure), scalable storage (e.g. RDBMs, NoSQL, graph databases), and/or specialized hardware (e.g. GPU, Multicore, FPGA, ASIC).</p>
<p class="p4"><strong>Models & Algorithms</strong>: Applicability of different learning techniques in different situations (e.g., simple statistics vs. large structured models); parallel acceleration of computationally intensive learning and inference; evaluation methodology; trade-offs between performance and engineering complexity; principled methods for dealing with large number of features; </p>
<p class="p4">Submissions should be written as extended abstracts, no longer than 4 pages (excluding references) in the <a title="LaTeX style" href="http://nips.cc/PaperInformation/StyleFiles">NIPS latex style</a>. Relevant work previously presented in non-machine-learning conferences is strongly encouraged. Exciting work that was recently presented is allowed, provided that the extended abstract mentions this explicitly. </p>
<p class="p4">Submission Deadline: September 30th, 2011.</p>
<p class="p4">Please refer to the <a title="Big Learning submission" href="http://biglearn.org/index.php/Authorinfo">website for detailed submission instructions</a>.</p>http://blog.smola.org/post/9604982818http://blog.smola.org/post/9604982818Tue, 30 Aug 2011 16:36:51 -0700Introduction to Graphical Models<p>Here’s a link to slides [<a title="MLSS Purdue" href="http://alex.smola.org/talks/purdue.key">Keynote</a>, <a title="MLSS Purdue" href="http://alex.smola.org/talks/purdue.pdf">PDF</a>] for a basic course on Graphical Models for the Internet that I’m giving at <a title="MLSS 2011" href="http://learning.stat.purdue.edu/mlss/mlss/start">MLSS 2011</a> in Purdue that Vishy Vishwanathan is organizing. The selection is quite biased, limited, and subjective, but it’s meant to complement the other classes at the summer school.</p>
<p>The slides are likely to grow, so in case of doubt, check for updates. Comments are most welcome. And yes, it’s a horribly incomplete overview, due to space and time constraints.</p>http://blog.smola.org/post/6631465935http://blog.smola.org/post/6631465935Fri, 17 Jun 2011 13:40:48 -0700Distributed synchronization with the distributed star<p>Here’s a simple synchronization paradigm between many computers that scales with the number of machines involved and which essentially keeps cost at \(O(1)\) per machine. For lack of a better name I’m going to call it the distributed star since this is what the communication looks like. It’s quite similar to how memcached stores its (key,value) pairs. </p>
<p>Assume you have n computers, each of which have a copy of a large parameter vector w (typically several GB) and we would like to keep these copies approximately synchronized.</p>
<p>A simple version would be to pause the computers occasionally, have them send their copies to a central node, and then return with a consensus value. Unfortunately this takes \(O(|w| \log n)\) time if we aggregate things on a tree (we can reduce it by streaming data through but this makes the code a lot more tricky). Furthermore we need to stop processing while we do so. The latter may not even be possible and any local computation is likely to benefit from having most up-to-date parameters. </p>
<p>Instead, we use the following: assume that we can break up the parameter vector into smaller (key, value) pairs that need synchronizing. We now have each computer send its local changes for each key to a central server, update the parameters there, and later receive information about global changes. So far this algorithm looks stupid - after all, when using n machines it would require \(O(|w| n)\) time to process since the central server is the bottleneck. This is where the distributed star comes in. Instead of keeping all data on a single server, we use the well known distributed hashing trick and send it to a machine n from a pool P of servers:</p>
<p>$$n(\mathrm{key}, P) = \mathop{\mathrm{argmin}}_{n \in P} ~ h(\mathrm{key}, n)$$</p>
<p>Here h is the hash function. Such a system spreads communication evenly and it leads to an \(O(|w| n/|P|)\) load per machine. In particular, if we make each of the computers involved in the local computation also members of the pool, i.e. if we have \(n = |P|\) we get an \(O(|w|)\) cost for keeping terms synchronized regardless of the number of machines involved. </p>
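<p>The key-to-server assignment can be sketched in a few lines; the blake2b-based hash is an illustrative choice. (In the systems literature this argmin construction is known as rendezvous or highest-random-weight hashing.)</p>

```python
import hashlib

def h(key, node):
    """Hash of a (key, node) pair into a large integer range."""
    data = f"{key}|{node}".encode()
    return int.from_bytes(hashlib.blake2b(data, digest_size=8).digest(), "big")

def server_for(key, pool):
    """n(key, P) = argmin over n in P of h(key, n)."""
    return min(pool, key=lambda node: h(key, node))

pool = ["node0", "node1", "node2", "node3"]
assignment = {k: server_for(k, pool) for k in map(str, range(10000))}

# Removing one node only remaps the keys that lived on it: for any key whose
# argmin node survives in the smaller pool, the argmin is unchanged.
smaller = [n for n in pool if n != "node3"]
moved = sum(1 for k, n in assignment.items()
            if n != "node3" and server_for(k, smaller) != n)
```

<p>Every client computes the same mapping independently, with no lookup table to distribute or maintain.</p>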
<p>Obvious approximations: we assume that all machines are on the same switch. Moreover we assume that the times to open a TCP/IP connection are negligible (we keep them open after the first message) relative to the work to transmit the data. </p>
<p>The reason I’m calling this a distributed star is that for each key we have a star communication topology, it’s just that we use a different star for each key. If anyone in systems knows what this thing is really called, I’d greatly appreciate feedback. Memcached uses the same setup, alas it doesn’t have versioned writes and callbacks, so we had to build our own system using <a title="ICE" href="http://www.zeroc.com">ICE</a>.</p>http://blog.smola.org/post/6361194871http://blog.smola.org/post/6361194871Thu, 09 Jun 2011 13:01:00 -0700Speeding up Latent Dirichlet Allocation<p>The code to our LDA implementation on Hadoop is released on <a title="Yahoo LDA" href="https://github.com/shravanmn/Yahoo_LDA">Github</a> under the Mozilla Public License. It’s seriously fast and scales very well to 1000 machines or more (don’t worry, it runs on a single machine, too). We believe that at present this is the fastest implementation you can find, in particular if you want to have a) 1000s of topics, b) a large dictionary, c) a large number of documents, and d) Gibbs sampling. It handles quite comfortably a billion documents. Shravan Narayanamurthy deserves all the credit for the code. The paper describing an earlier version of the system appeared in <a title="VLDB paper" href="http://www.vldb.org/pvldb/vldb2010/pvldb_vol3/R63.pdf">VLDB 2010</a>. </p>
<p>Some background: Latent Dirichlet Allocation by Blei, Jordan and Ng <a title="JMLR paper" href="http://jmlr.csail.mit.edu/papers/volume3/blei03a/blei03a.pdf">(JMLR 2003)</a> is a great tool for aggregating terms beyond what simple clustering can do. While the original paper showed exciting results it wasn’t terribly scalable. A significant improvement was the collapsed sampler of Griffiths and Steyvers <a title="Collapsed Sampler" href="http://psiexp.ss.uci.edu/research/papers/sciencetopics.pdf">(PNAS 2004)</a>. The key idea was that in an exponential families model with conjugate prior you can integrate out the natural parameter, thus providing a sampler that mixed much more rapidly. It uses the following update equation to sample the topic for a word.</p>
<p>$$p(t|d,w) \propto \frac{n^*(t,d) + \alpha_t}{n^*(d) + \sum_{t’} \alpha_{t’}} \frac{n^*(t,w) + \beta_w}{n^*(t) + \sum_{w’} \beta_{w’}}$$</p>
<p>Here t denotes the topic, d the document, w the word, and \(n(t,d), n(d), n(t,w), n(t)\) denote the number of words which satisfy a particular (topic, document), (document), (topic, word), (topic) combination. The starred quantities such as \(n^*(t,d)\) indicate counts from which the current word, whose topic we are about to resample, has been omitted. </p>
<p>Unfortunately the above formula is quite slow when it comes to drawing from a large number of topics. Worst of all, it is nonzero throughout. A rather ingenious trick was proposed by Yao, Mimno, and McCallum <a title="fast sampler" href="http://www.cs.umass.edu/~mimno/papers/fast-topic-model.pdf">(KDD 2009)</a>. It uses the fact that the relevant terms in the sum are sparse and only the \(\alpha\) and \(\beta\) dependent terms are dense (and obviously the number of words per document doesn’t change, hence we can drop that, too). This yields</p>
<p>$$p(t|d,w) \propto \frac{\alpha_t \beta_w}{n^*(t) + \sum_{w’} \beta_{w’}} + \frac{n^*(t,d) \beta_w}{n^*(t) + \sum_{w’} \beta_{w’}} + \frac{(n^*(t,d) + \alpha_t) n^*(t,w)}{n^*(t) + \sum_{w’} \beta_{w’}}$$</p>
<p>Out of these three terms, only the first one is dense; all others are sparse. Hence, if we knew the sum over \(t\) for all three summands we could design a sampler which first samples which of the blocks is relevant and then which topic within each of these blocks. This is efficient since the first term doesn’t actually depend on \(n(t,w)\) or \(n(t,d)\) but rather only on \(n(t)\) which can be updated efficiently after each new topic assignment. In other words, we are able to update the dense term in \(O(1)\) operations after each sampling step and the remaining terms are all sparse. This trick gives a 10-50 times speedup in the sampler over a dense representation.</p>
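<p>The block-then-topic sampling step can be sketched as follows; representing each block as a plain dict and the toy masses in the test are illustrative only (a production sampler maintains the block totals incrementally and stores the sparse blocks compactly).</p>

```python
import random

def sample_blocked(blocks):
    """Draw a topic with probability proportional to its total mass across
    the given blocks.  blocks: list of {topic: unnormalized mass} dicts,
    e.g. the dense smoothing block followed by the sparse document and
    topic-word blocks.  We first pick a block in proportion to its total
    mass, then walk only that block's (typically few) entries."""
    totals = [sum(b.values()) for b in blocks]
    u = random.random() * sum(totals)
    for block, total in zip(blocks, totals):
        if u >= total:          # not this block: skip it wholesale
            u -= total
            continue
        for topic, mass in block.items():
            u -= mass
            if u <= 0.0:
                return topic
    raise AssertionError("unreachable: u < total guarantees a return")
```

<p>Only the chosen block is ever traversed, which is what turns the sparsity of the second and third summands into an actual runtime saving.</p>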
<p>To combine several machines we have two alternatives: one is to perform one sampling pass over the data and then reconcile the samplers. This was proposed by Newman, Asuncion, Smyth, and Welling <a title="asuncion jmlr paper" href="http://www.ics.uci.edu/~asuncion/pubs/JMLR_09.pdf">(JMLR 2009)</a>. While the approach proved to be feasible, it has a number of disadvantages. It only exercises the network while the CPU sits idle and vice versa. Secondly, a deferred update makes for slower mixing. Instead, one can simply have each sampler communicate with a distributed central storage continuously. In a nutshell, each node sends the differential to the global statekeeper and receives from it the latest global value. The key point is that this occurs asynchronously and moreover that we are able to decompose the state over several machines such that the available bandwidth grows with the number of machines involved. More on such distributed schemes in a later post.</p>http://blog.smola.org/post/6359713161http://blog.smola.org/post/6359713161Thu, 09 Jun 2011 12:02:00 -0700Bloom Filters<p>Bloom filters are one of the really ingenious and simple building blocks for randomized data structures. A great summary is the paper by <a title="Bloom Filter Review" href="http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.im/1109191032">Broder and Mitzenmacher</a>. In this post I will briefly review its key ideas since it forms the basis of the <a title="Countmin" href="https://sites.google.com/site/countminsketch/">Count-Min sketch</a> of Cormode and Muthukrishnan, it will also be necessary for an accelerated version of the graph kernel of <a title="Nino's NIPS 2009 talk" href="http://videolectures.net/nips09_shervashidze_fsk/">Shervashidze and Borgwardt</a>, and finally, a similar structure will be needed to compute data streams over time for a real-time sketching service.</p>
<p>At its heart a bloom filter uses a bit vector of length N and a set of k hash functions mapping arbitrary keys x into their hash values \(h_i(x) \in [1 .. N]\) where \(i \in \{1 .. k\}\) denotes the hash function. The Bloom filter allows us to perform approximate set membership tests where we have no false negatives but we may have a small number of false positives. </p>
<p>Initialize(b): Set all \(b[i] = 0\)</p>
<p>Insert(b,x): For all \(i \in \{1 .. k\}\) set \(b[h_i(x)] = 1\)</p>
<p>Query(b, x): Return true if \(b[h_i(x)] = 1\) for all \(i \in \{1 .. k\}\), false otherwise</p>
<p>Furthermore, unions and intersections between sets are easily achieved by performing bit-wise OR and AND operations on the bloom hashes of the corresponding sets respectively.</p>
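<p>A compact sketch of these operations; deriving the k hash functions by seeding a single blake2b hash, and the default sizes, are illustrative assumptions.</p>

```python
import hashlib

class BloomFilter:
    def __init__(self, n_bits=1024, k=4):
        self.n, self.k = n_bits, k
        self.bits = [False] * n_bits

    def _positions(self, x):
        """The k hash positions h_1(x) .. h_k(x), one per seeded hash."""
        for i in range(self.k):
            d = hashlib.blake2b(f"{i}:{x}".encode(), digest_size=8).digest()
            yield int.from_bytes(d, "big") % self.n

    def insert(self, x):
        for pos in self._positions(x):
            self.bits[pos] = True

    def query(self, x):
        """True = x may be present (small false positive rate);
        False = x is definitely absent (no false negatives)."""
        return all(self.bits[pos] for pos in self._positions(x))

    def union(self, other):
        """Bitwise OR merges the sets represented by two filters."""
        out = BloomFilter(self.n, self.k)
        out.bits = [a or b for a, b in zip(self.bits, other.bits)]
        return out
```

<p>A real implementation would pack the bits into machine words so that union and intersection become single OR/AND sweeps.</p>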
<p>It is clear that if we insert x into the Bloom filter the query will return true, since all relevant bits in b are 1. To analyze the probability of a false positive, consider the probability of a bit being 1. After inserting m items using k hash functions on a range of N we have</p>
<p>$$\Pr(b[i] = \mathrm{TRUE}) = 1 - (1 - \frac{1}{N})^{k m} \approx 1 - e^{-\frac{km}{N}}$$</p>
<p>For a false positive to occur, all k bits associated with the hash functions need to be 1. Ignoring the fact that the hash functions might collide, the probability of a false positive is given by</p>
<p>$$p \approx (1 - e^{-\frac{km}{N}})^k$$</p>
<p>Taking derivatives with respect to \(\frac{km}{N}\) shows that the minimum is obtained for \(\log 2\), that is \(k = \frac{N}{m} \log 2\).</p>
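<p>This is easy to verify numerically; the parameters below are illustrative.</p>

```python
import math

def false_positive_rate(N, m, k):
    """p = (1 - exp(-k m / N))**k after inserting m items into N bits."""
    return (1.0 - math.exp(-k * m / N)) ** k

N, m = 10_000, 1_000
k_opt = (N / m) * math.log(2)   # analytic optimum (N/m) log 2, about 6.93 here
best_k = min(range(1, 30), key=lambda k: false_positive_rate(N, m, k))
```

<p>With 10 bits per item the best integer choice is \(k = 7\), matching \((N/m)\log 2 \approx 6.93\), and the resulting false positive rate is under one percent.</p>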
<p>One of the really nice properties of the Bloom filter is that all memory is used to store the information about the set rather than an index structure storing the keys of the items. The downside is that it is impossible to read out b without knowing the queries. Also note that it is impossible to remove items from the Bloom filter once they’ve been inserted. After all, we do not know whether some of the bits might have collided with another key, hence setting the corresponding bits to 0 would cause false negatives. </p>http://blog.smola.org/post/4206530042http://blog.smola.org/post/4206530042Wed, 30 Mar 2011 03:47:00 -0700Real simple covariate shift correction<p>Imagine you want to design some algorithm to detect cancer. You get data of healthy and sick people; you train your algorithm; it works fine giving you high accuracy and you conclude that you’re ready for a successful career in medical diagnostics.</p>
<p>Not so fast …</p>
<p>Many things could go wrong. In particular, the distributions that you work with for training and those in the wild might differ considerably. This happened to an unfortunate startup I had the opportunity to consult for many years ago. They were developing a blood test for a disease that affects mainly older men and they’d managed to obtain a fair amount of blood samples from patients. It is considerably more difficult, though, to obtain blood samples from healthy men (mainly for ethical reasons). To compensate for that, they asked a large number of students on campus to donate blood and they performed their test. Then they asked me whether I could help them build a classifier to detect the disease. I told them that it would be very easy to distinguish between both datasets with probably near perfect accuracy. After all, the test subjects differed in age, hormone level, physical activity, diet, alcohol consumption, and many more factors unrelated to the disease. This was unlikely to be the case with real patients: Their sampling procedure had caused an extreme case of covariate shift that couldn’t be corrected by conventional means. In other words, training and test data were so different that nothing useful could be done and they had wasted significant amounts of money. </p>
<p>In general the situation is not quite so dire. Assume that we want to estimate some dependency \(p(y|x)\) for which we have labeled data \((x_i, y_i)\). Alas, the observations \(x_i\) are drawn from some distribution \(q(x)\) rather than the ‘proper’ distribution \(p(x)\). If we adopt a risk minimization approach, that is, if we want to solve</p>
<p>$$\mathrm{minimize}_{f} \frac{1}{m} \sum_{i=1}^m l(x_i, y_i, f(x_i)) + \frac{\lambda}{2} \|f\|^2$$</p>
<p>we will need to re-weight each instance by the ratio of probabilities that it would have been drawn from the correct distribution, that is, we need to reweight things by \(\frac{p(x_i)}{q(x_i)}\). This is the ratio of how frequently an instance would have occurred under the correct distribution vs. how frequently it occurred under the sampling distribution \(q\). It is sometimes also referred to as the Radon-Nikodym derivative. Such a method is called importance sampling and the following derivation shows why it is valid:</p>
<p>$$\int f(x) dp(x) = \int f(x) \frac{dp(x)}{dq(x)} dq(x)$$</p>
<p>Alas, we do not know \(\frac{dp(x)}{dq(x)}\), so before we can do anything useful we need to estimate the ratio. Many methods are available, e.g. some rather fancy operator theoretic ones which try to recalibrate the expectation operator directly using a minimum-norm or a maximum entropy principle. However, there exists a much more pedestrian, yet quite effective approach that will give almost as good results: logistic regression. </p>
<p>After all, we know how to estimate probability ratios. This is achieved by learning a classifier to distinguish between data drawn from \(p\) and data drawn from \(q\). If it is impossible to distinguish between the two distributions then it means that the associated instances are equally likely to come from either one of the two distributions. On the other hand, any instances that can be well discriminated should be significantly over/underweighted accordingly. For simplicity’s sake assume that we have an equal number of instances from both distributions, denoted by \(x_i \sim p(x)\) and \(x_i’ \sim q(x)\) respectively. Now denote by \(z_i\) labels which are 1 for data drawn from \(p\) and -1 for data drawn from \(q\). Then the probability in a mixed dataset is given by</p>
<p>$$p(z=1|x) = \frac{p(x)}{p(x) + q(x)}$$</p>
<p>Hence, if we use a logistic regression approach which yields \(p(z=1|x) = \frac{1}{1 + e^{-f(x)}}\), it follows (after some simple algebra) that </p>
<p>$$\frac{p(z=1|x)}{p(z=-1|x)} = e^{f(x)}.$$</p>
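<p>A toy end-to-end sketch with NumPy: the one-dimensional Gaussian data, the plain gradient descent, and dropping the regularizer are all illustrative simplifications.</p>

```python
import numpy as np

rng = np.random.default_rng(1)

# 'Proper' distribution p = N(0,1) and sampling distribution q = N(1,1).
xp = rng.normal(loc=0.0, size=500)   # z = +1: drawn from p
xq = rng.normal(loc=1.0, size=500)   # z = -1: drawn from q
x = np.concatenate([xp, xq])
z = np.concatenate([np.ones(500), -np.ones(500)])

# Fit f(x) = w x + c by gradient descent on the logistic loss.
w, c = 0.0, 0.0
for _ in range(3000):
    f = w * x + c
    g = -z / (1.0 + np.exp(z * f))   # d/df of log(1 + exp(-z f))
    w -= 0.1 * np.mean(g * x)
    c -= 0.1 * np.mean(g)

# e^{f(x)} estimates p(x)/q(x): the importance weights for the q-sample.
weights = np.exp(w * xq + c)
```

<p>For these two Gaussians \(\log p(x)/q(x) = \tfrac{1}{2} - x\), so the fitted \(f\) should approach \(\tfrac{1}{2} - x\) and instances from \(q\) that look like they came from \(p\) receive the largest weights.</p>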
<p>Now we only need to solve the logistic regression problem</p>
<p>$$\mathrm{minimize}_f \frac{1}{2m} \sum_{(x,z)} \log [1 + \exp(-z f(x))] + \frac{\lambda}{2} \|f\|^2$$</p>
<p>to obtain \(f\). Subsequently we can use \(e^{f(x_i)}\) as covariate shift correction weights in training our actual classifier. The good news is that we can use an off-the-shelf tool such as logistic regression to deal with a decidedly nonstandard estimation problem. </p>http://blog.smola.org/post/4110255196http://blog.smola.org/post/4110255196Sat, 26 Mar 2011 09:44:00 -0700Graphical Models for the Internet<p>Here are a few tutorial slides I prepared with <a title="Amr Ahmed" href="http://www.cs.cmu.edu/~amahmed/">Amr Ahmed</a> for <a title="WWW 2011" href="http://www.www2011india.com/">WWW 2011</a> in Hyderabad next week. They describe in fairly basic (and in the end rather advanced) terms how one might use graphical models for the amounts of data available on the internet. Comments and feedback are much appreciated. </p>
<p><a title="WWW 2011 tutorial" href="http://alex.smola.org/drafts/www11-1.pdf">PDF</a> <a title="WWW 2011 tutorial slides" href="http://alex.smola.org/drafts/www11-1.key">Keynote</a></p>http://blog.smola.org/post/4075687192http://blog.smola.org/post/4075687192Thu, 24 Mar 2011 19:02:57 -0700Memory Latency, Hashing, Optimal Golomb Rulers and Feistel Networks<p>In many problems involving hashing we want to look up a range of elements from a vector, e.g. of the form \(v[h(i,j)]\) for arbitrary \(i\) and for a range of \(j \in \{1, \ldots, n\}\) where \(h(i,j)\) is a hash function. This happens e.g. for multiclass classification, collaborative filtering, and multitask learning. </p>
<p>While this works just fine in terms of estimation performance, traversing all values of \(j\) leads to an algorithm with horrible memory access patterns. Modern RAM chips are much faster (over 10x) when reading values in sequence than when carrying out random reads. Furthermore, random access destroys the benefit of a cache. This leads to algorithms which are efficient in terms of their memory footprint but relatively slow in practice. One way to address this is to bound the range of \(h(i,j)\) for different values of \(j\). Here are some ways we could do this:</p>
<ol><li>Decompose \(h(i,j) = h(i) + j\). This is computationally very cheap and has good sequential access properties, but it leads to horrible collisions should there ever be two \(i\) and \(i’\) for which \(|h(i) - h(i’)| \leq n\). </li>
<li>Decompose \(h(i,j) = h(i) + h’(j)\) where \(h’(j)\) has a small range of values. <br/>This is a really bad idea since now we have a nontrivial probability of collision as soon as the range of \(h’(j)\) is less than \(n^2\) due to the birthday paradox. Moreover, for adjacent values \(h(i)\) and \(h(i’)\) we will get many collisions.</li>
<li>Decompose \(h(i,j) = h(i) + g(j)\) where \(g(j)\) is an <a title="Optimal Golomb Ruler" href="http://en.wikipedia.org/wiki/Golomb_ruler">Optimal Golomb Ruler</a>.<br/>The latter is an increasing sequence of integers for which any pairwise distance occurs at most once. In other words, the condition \(g(a) - g(b) = g(c) - g(d)\) implies that \(a = c\) and \(b = d\). <a title="John Langford" href="http://hunch.net/~jl">John Langford</a> proposed this to address the problem. In fact, it solves our problem since there are a) no collisions for a fixed \(i\) and b) at most one collision for neighboring values \(h(i)\) and \(h(i’)\) (due to the Golomb ruler property). Alas, this only works up to \(n=26\) since finding an Optimal Golomb Ruler is hard (it is currently unknown whether it is actually NP hard).</li>
<li>An alternative that works for larger n and that is sufficiently simple to compute is to use cryptography. After all, all we want is that the hash function \(h’(j)\) has small range and that it doesn’t have any self collisions or any systematic collisions. We can achieve this by encrypting j using the key i to generate an encrypted message of N possible values. In other words we use<br/>$$h(i,j) = h(i) + \mathrm{crypt}(j|i,N)$$<br/>Since it is an encryption of j, the mapping is invertible and we won’t have collisions for a given value of j. Furthermore, for different i the encodings will be uncorrelated (after all, i is the key). Finally, we can control the range \(N>n\) simply by choosing the encryption algorithm. In this case the random memory access is of bounded range, hence the CPU cache will not suffer from too many misses.</li>
</ol><p>A particularly nice algorithm is the <a title="Feistel cipher" href="http://en.wikipedia.org/wiki/Feistel_cipher">Feistel cipher</a>. It works as follows: define the iterative map</p>
<p>$$f(x,y) = (y, x \,\mathrm{XOR}\, h(y))$$</p>
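<p>As a hedged sketch, the map above can be turned into a small keyed permutation \(\mathrm{crypt}(j|i,N)\) by splitting \(j\) into two half-words and iterating the map with the key \(i\) mixed into the round function (the toy integer hash and its constants below are placeholder assumptions, not part of the original scheme):</p>

```python
def crypt(j, i, bits=16, rounds=4):
    """Invertible permutation of [0, 2**bits), keyed by i, via a Feistel network.

    Split j into half-words (x, y) and iterate f(x, y) = (y, x XOR h(y)).
    Every round is invertible no matter what h is, so the whole map is a
    permutation: no self collisions of j for a fixed key i.
    """
    half = bits // 2
    mask = (1 << half) - 1

    def h(y, r):
        # Toy keyed integer hash for the round function (a placeholder;
        # any hash that mixes y with the key i and round index r will do).
        return ((y + 1) * 2654435761 + i * 40503 + r * 97) & mask

    x, y = (j >> half) & mask, j & mask
    for r in range(rounds):
        x, y = y, x ^ h(y, r)
    return (x << half) | y
```

<p>Using \(h(i,j) = h(i) + \mathrm{crypt}(j|i,2^{16})\) then keeps all accesses for a fixed \(i\) within a window of \(2^{16}\) slots, collision free in \(j\).</p>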
<p>Here \(h\) is a hash function. After 4 iterations \((x,y) \to f(x,y)\) we obtain an encryption of \((x,y)\). Now use \(x=i\) and \(y = j\) to obtain the desired result. Basically we are trading off memory latency against computation (which is local).</p>http://blog.smola.org/post/3243371889http://blog.smola.org/post/3243371889Fri, 11 Feb 2011 17:56:00 -0800Collaborative Filtering considered harmful<p>Much excellent work has been published on collaborative filtering, in particular on recovering missing entries in a matrix. The Netflix contest has contributed significantly to progress in the field. </p>
<p>Alas, reality is not quite as simple as that. Very rarely will we ever be able to query a user about arbitrary movies, books, or other objects. Instead, user ratings are typically expressed as <em>preferences</em> rather than absolute statements: a preference for <em>Die Hard</em> given a generic set of movies only tells us that the user appreciates action movies; however, a preference for <em>Die Hard</em> over <em>Terminator</em> or <em>Rocky</em> suggests that the user might favor Bruce Willis over other action heroes. In other words, the context of user choice is vital when estimating user preferences. </p>
<p>Hence, if we attempt to estimate scores \(s_{ui}\) of user \(u\) regarding item \(i\), it is important to use the context within which the ratings were obtained. For instance, if we are given a session of items \((i_1, \ldots, i_n)\) out of which item \(i^*\) was selected, we might want to consider a logistic model of the form:</p>
<p>$$-\log p(i^*|i_1, \ldots, i_n) = \log \left[\sum_{i=1}^n e^{s_{ui}} \right] - s_{ui^*}$$</p>
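<p>For concreteness, the per-session loss above is just a softmax cross-entropy over the items shown in that session. A minimal sketch (the function name is made up for illustration):</p>

```python
import math

def session_nll(scores, chosen):
    """-log p(i* | i_1, ..., i_n) for a single session.

    scores: the s_{ui} of the items shown to user u in this session.
    chosen: index of the item i* the user actually picked.
    """
    m = max(scores)  # subtract the max so exp() cannot overflow
    logsumexp = m + math.log(sum(math.exp(s - m) for s in scores))
    return logsumexp - scores[chosen]
```

<p>Summing this over all sessions (plus a regularizer on the scores) gives the training objective; with two equally scored items the loss is \(\log 2\), as expected for a coin flip.</p>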
<p>The option of no action is easy to add, simply by including a null score \(s_{u0}\) which captures the event that the user selects nothing.<br/><a title="Shuang Hong" href="http://www.cc.gatech.edu/~syang46/">Shuang Hong</a> tried out this idea and obtained a significant performance improvement on a number of collaborative filtering datasets. Bottom line: make sure that the problem you’re solving is actually the one that a) generated the data and b) will help you in practice. That is, in many cases <em>matrix completion is not the problem </em>you want to solve, even though it might win you benchmarks.</p>http://blog.smola.org/post/3241732437http://blog.smola.org/post/3241732437Fri, 11 Feb 2011 16:18:00 -0800Why<p>Some readers might wonder why I’m writing this blog. Here’s an (incomplete) list:</p>
<ul><li>It’s fun.</li>
<li>There are lots of fantastic blogs discussing the philosophy and big questions of machine learning (e.g. John Langford’s <a title="Hunch" href="http://hunch.net">hunch.net</a>) but I couldn’t find many covering simple tricks of the trade.</li>
<li>Scientific papers sometimes obscure simple ideas. In the most extreme case, a paper will get rejected if the idea is presented in terms that are too simple (it happened to me more than once, and the paper was praised once the simple parts had been obfuscated). Also, papers need to come with ample evidence for why an idea works: strong theoretical guarantees and lots of experiments. This is all needed as a safeguard and it’s really, really important. But it often hides the basic idea.</li>
<li>Some ideas are really cute and useful but not big enough to write a paper about. It’s pointless to write 10 pages if the idea can be fully covered in one. We’d need a journal of one-page ideas to deal with this.</li>
<li>Many practitioners are scared to pick up a paper with many equations but they might be willing to spend 10 minutes reading a blog post.</li>
</ul>http://blog.smola.org/post/1130285201http://blog.smola.org/post/1130285201Wed, 15 Sep 2010 21:17:43 -0700