Day 2 from USENIX: Cybercrime economics and earthquake-detecting smartphones

My second day at USENIX had quite a few interesting presentations. The opening keynote of the Annual Technical Conference was about the economics behind cybercrime. The speaker explained the “industry” behind malware, and spam in particular. His research group infiltrated a botnet responsible for spam in order to better quantify the revenue it generates. While the click-through rate on spam is very low, the sheer scale of the operation makes it a very lucrative business.

He also briefly covered the CAPTCHA market and how crowd-sourcing is used to break CAPTCHAs. To sum up the operation: a customer wants email addresses from webmail services (Hotmail, Yahoo, Gmail) to send spam from, since those domains can’t realistically be blocked. Registering each address requires converting a CAPTCHA to text, so the criminal group captures the CAPTCHA, sends it to a queue where a human worker solves it and sends the text back. Those workers process CAPTCHAs all day long for a ridiculously low wage (around $10 per day). Criminals have also tried more automated ways of solving CAPTCHAs, but the accuracy turns out to be low, and developing the image-processing algorithms is fairly time-consuming and expensive; it’s much easier for the CAPTCHA provider to change the generating algorithm than it is for the attackers to keep up.
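Just to make the workflow concrete, here is a minimal sketch of what that relay queue could look like. Everything in it (the names, the 60-second timeout, the simulated human) is my own illustration, not anything shown in the talk.

```python
import queue
import threading

# Sketch of the CAPTCHA relay described above: the registration bot
# enqueues CAPTCHA images, and human workers dequeue them, type the
# answer, and send the text back.
captcha_queue = queue.Queue()

def submit_captcha(image_bytes):
    """Called by the account-registration bot when a CAPTCHA appears."""
    result = queue.Queue(maxsize=1)           # per-request reply channel
    captcha_queue.put((image_bytes, result))
    return result.get(timeout=60)             # wait for a human to answer

def ask_human(image_bytes):
    # Placeholder: in the real operation a person reads the image and
    # types the characters; here we just simulate an answer.
    return "x7kq2f"

def human_worker():
    """One solver draining the shared queue all day long."""
    while True:
        image_bytes, result = captcha_queue.get()
        result.put(ask_human(image_bytes))
        captcha_queue.task_done()

# Start a pool of solvers; throughput scales with the head count,
# which is exactly why the business model is cheap labour.
for _ in range(4):
    threading.Thread(target=human_worker, daemon=True).start()
```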

Another interesting finding from their research is that most of the world’s spam business runs through about 4 banks. Since it’s fairly difficult for the spammers to change banks, as few banks will accept such high-risk activity, it would be possible to make it very difficult for criminals to process sales and move money around. Getting there will take some time because of the politics involved, as those banks are usually in emerging countries with “flexible laws” :-).

The morning concluded with interesting presentations about the use of GPUs. The first was about more efficient scheduling of commands on the GPU using a custom driver, and the second about virtualizing GPUs to make them available as processing units to virtual machines. That solution also involved a custom scheduler to ensure the GPUs are used to their maximum capacity and to provide QoS where required.
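Neither talk showed code, but the QoS part boils down to proportional-share scheduling. Here is a rough sketch, with made-up names and accounting, of how a weighted scheduler could pick which VM’s GPU commands run next:

```python
# Sketch (not the presenters' code) of QoS-aware sharing of a GPU:
# each VM gets a weight, and the VM whose consumed GPU time divided
# by its weight is smallest runs next, so heavier weights receive
# proportionally more of the device.
class GpuScheduler:
    def __init__(self):
        self.vms = {}                  # vm_id -> [weight, consumed_time]

    def register(self, vm_id, weight):
        self.vms[vm_id] = [weight, 0.0]

    def pick_next(self):
        # Choose the VM with the lowest weighted usage so far.
        return min(self.vms, key=lambda v: self.vms[v][1] / self.vms[v][0])

    def account(self, vm_id, runtime):
        # Charge the command batch's measured runtime back to its VM.
        self.vms[vm_id][1] += runtime

sched = GpuScheduler()
sched.register("vm-a", weight=2)       # vm-a is entitled to twice vm-b's share
sched.register("vm-b", weight=1)
for _ in range(6):
    vm = sched.pick_next()
    sched.account(vm, runtime=1.0)     # pretend each batch takes 1 ms
    print(vm)                          # vm-a comes out twice as often
```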

During the afternoon there was an interesting presentation by someone from HP Labs about a tool that proposes network topologies. Basically, you feed the tool the properties you want your network to have (available bandwidth, for instance), the number of servers, and so on. The tool then generates a list of possible network topologies and weeds out the ones that don’t make sense using heuristics. For instance, it can show you the effect of using 24-port versus 48-port switches on cost and bandwidth. There are also plans to integrate more parameters into the automated analysis, such as power and cooling.
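To give a feel for that 24- versus 48-port trade-off, here is a back-of-the-envelope sketch; the simple two-tier model and the unit prices are my own assumptions, not the tool’s actual heuristics.

```python
import math

# Rough two-tier sizing: each edge switch dedicates a fraction of its
# ports to uplinks (set by the oversubscription ratio), the rest face
# servers; core switches terminate all the uplinks.
def two_tier_switch_count(servers, ports, oversubscription=3):
    uplinks = ports // (oversubscription + 1)
    down = ports - uplinks
    edge = math.ceil(servers / down)
    core = math.ceil(edge * uplinks / ports)
    return edge, core

# Assumed prices, purely for illustration.
for ports, unit_cost in [(24, 3000), (48, 7000)]:
    edge, core = two_tier_switch_count(480, ports)
    total = (edge + core) * unit_cost
    print(f"{ports}-port: {edge} edge + {core} core switches, ~${total}")
```

Even this toy version shows why automating the search is useful: the cheaper switch does not automatically yield the cheaper network once you count the extra uplinks and core ports it forces.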

Another interesting presentation in the afternoon had to do with in-place log data processing. Instead of moving large amounts of log data to a central location for analysis, the software performs a large portion of the data manipulation and aggregation directly on the server where the log data is generated. There are also options to control the tolerated freshness and completeness of the data while executing the processing jobs. To ensure minimal impact on the primary services running on those hosts, low-overhead throttling mechanisms are used.
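Here is a rough sketch of the in-place idea, assuming a hypothetical collector and a crude sleep-based throttle; the flush interval is the knob trading freshness against shipping cost.

```python
import time
from collections import Counter

def extract_key(line):
    return line.split()[0]                 # placeholder parser

def ship_to_collector(counts):
    print(dict(counts))                    # stand-in for a network send

def aggregate_locally(log_lines, flush_interval=60, max_lines_per_sec=1000):
    """Aggregate on the host that produced the logs and only ship the
    (much smaller) summary, rather than the raw log data."""
    counts = Counter()
    last_flush = time.time()
    for n, line in enumerate(log_lines, 1):
        counts[extract_key(line)] += 1     # e.g. status code or URL
        if n % max_lines_per_sec == 0:
            time.sleep(1)                  # crude throttle: cap local CPU use
        if time.time() - last_flush >= flush_interval:
            ship_to_collector(counts)      # periodic flush for freshness
            counts.clear()
            last_flush = time.time()
    ship_to_collector(counts)              # final partial window
```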

The last presentation of the day was about research on using low-cost accelerometers, such as the ones found in smartphones, to detect seismic activity. The sheer number of potential sensors could help first responders identify the areas with the most potential damage. It could also be used as an early warning mechanism: earthquakes typically have a primary wave that precedes the damaging seismic wave, with roughly a one-minute delay between the two. That delay would allow basic protective actions before the damaging wave arrives, such as letting people get out of elevators or opening fire station doors to avoid further delaying first responders. A large array of low-cost sensors such as smartphones would also capture higher-resolution data that more accurately represents seismic wave patterns.
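The talk didn’t go into the detection algorithm itself, but a classic way to spot a P-wave arrival in a noisy accelerometer stream is the STA/LTA trigger (short-term average over long-term average). Here is a simple sketch; the window sizes and threshold are textbook-style values of my own choosing, not the presenters’ parameters.

```python
def sta_lta_trigger(samples, sta_len=50, lta_len=500, threshold=4.0):
    """Return the first index where the short-term average energy
    exceeds `threshold` times the long-term average energy."""
    energy = [s * s for s in samples]
    for i in range(lta_len, len(samples)):
        sta = sum(energy[i - sta_len:i]) / sta_len   # recent burst
        lta = sum(energy[i - lta_len:i]) / lta_len   # background noise
        if lta > 0 and sta / lta > threshold:
            return i        # candidate P-wave arrival sample
    return None
```

On a phone, a single trigger means very little (it could just be the phone being dropped), which is why the large number of sensors matters: reports from many devices in the same area can be corroborated before raising an alert.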

The presenter concluded by showing an example of how tablets and smartphones could be used to perform basic medical diagnoses in emerging countries using low-cost equipment.
