Donations: By The Numbers

I was recently asked by someone about the donations that rfxn.com receives, more specifically what they amount to. In the interest of answering this person and anyone else who may be curious, I thought I would put together a small post about it.

Firstly, it needs to be said that although the projects have been active for nearly 8 years and will pass 700,000 downloads sometime in January 2011, I have only been accepting donations since late January of 2006. That is around the time rfxn.com shifted from a for-profit managed services provider, which offered our projects as a contribution back to the community, to simply a community site dedicated to the projects, as I moved into full-time employment.

In that time, almost 5 years, there have been fewer than 100 donations to the projects (73 to be exact), an average of 14 donations per year or a little more than 1 per month. The average donation amount is $35, and there are typically only two donations per year totaling more than $100. The yearly donation average is $521 and the monthly average is $43. There have been only 3 repeat donors, and they donate on average once every 18 months. The sum of all donations since January of 2006 is $2,608 USD, with $73 of that going towards transaction fees.

For context, there are currently 10 maintained projects, comprising 23,706 lines of code, which together receive an average of 8,000 downloads per month. There have been a little over 690k downloads to date, and from the projects that access rfxn.com servers regularly we can derive that there are currently at least 24,629 unique IP addresses (servers) running one or more projects (only 3 projects regularly access data on rfxn.com post-install, so the true figure is likely an order of magnitude larger given the total downloads).

I have recently added a donation roll page that lists every donation to rfxn.com projects to date, along with a widget on all pages that displays the 5 most recent donations. This is an effort to further acknowledge the minority of individuals who contribute financially to the projects and to be completely transparent about donations. As the above clearly shows, donations are not a means of survival for the projects, let alone a means for me personally to survive; the projects are developed in my spare time, between my full-time job and life. This spare-time nature of maintaining the projects has, over the years, resulted in a fair share of less-than-constructive comments about the projects and the frequency of updates. Although some of these comments do have merit, in general the projects are mature, stable and have stood the test of time, proving they are just as relevant today as they were yesterday.

I hope this post has helped to better illustrate the personal commitment I have to the projects and the (financial) incentives that exist around them, and that it provides a clear and transparent account of donations.

LMD: One Year Later

With my move back to Canada behind me and as I adjust to some new routines in life, it's about time to get back into the mix with the projects. Though things have been slow the last couple of months, that has not stopped me from making sure regular and prompt malware updates are released.

Today, we reflect on the first year of Linux Malware Detect, which saw its first, very infantile beta release about a year ago. The project has evolved in a lot of ways from its original goals, and it has certainly changed for the better in every way. What was originally to be a closed project, relegated to mostly internal work-related needs, ended up like most of my projects morphing into a public release. The first release saw the world with fewer than 200 signatures, no reliable signature update method, manual upgrade options and very flawed scanning and detection methods (v0.7 and earlier). Now we sit at version 1.3.6, with 4,813 signatures, a scanning method that, though it still needs some work, is far superior to what was originally in place, and a detection routine based on solid MD5 hashes and HEX signatures. We have cleaner rules that can clean some nasty injected malware, a fully functional quarantine and restore system, a reporting system, real-time file-based monitoring, an integrated signature updater and version updater, and a vibrant community of users that regularly submit malware for review. Yes, LMD has grown up!

The most grown-up part of LMD has to be how signatures are handled; their processing is now an almost entirely automated process, which was detailed a little more in Signature Updates & Threat Database posted in September. The key phrase here, though, is "almost entirely automated": every day that the processing scripts run to bring in new malware, there are always a number of files that can't be processed automatically, and these are moved to a manual review queue. With how busy life has been the last couple of months, the review queue has slowly risen to 1,097 files pending review. This queue is at the top of my list to tackle over the next couple of days and weeks; it's a lot of work to review that much malware, but it will get done. Many of the files to review are actual user submissions, so if you did submit something and find it's still not detected by LMD, this would be why :).
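For the curious, here is a minimal sketch of that kind of triage step, purely illustrative and not LMD's actual code: samples that already match an MD5 hash or a known HEX pattern are skipped, and anything left over is pushed to a manual review queue. The directory names and signature inputs are assumptions.

import hashlib, shutil
from pathlib import Path

INCOMING = Path("incoming")          # hypothetical drop directory for new samples
REVIEW_QUEUE = Path("review.queue")  # hypothetical manual review queue

def triage(md5_sigs, hex_patterns):
    # md5_sigs: set of known MD5 hex digests; hex_patterns: list of known byte patterns
    REVIEW_QUEUE.mkdir(exist_ok=True)
    for sample in INCOMING.iterdir():
        data = sample.read_bytes()
        if hashlib.md5(data).hexdigest() in md5_sigs:
            continue  # already covered by an existing MD5 signature
        if any(p in data for p in hex_patterns):
            continue  # matches a known HEX pattern, handled automatically
        # nothing matched: queue the file for manual review
        shutil.move(str(sample), str(REVIEW_QUEUE / sample.name))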

There is still a lot on the to-do list for LMD going forward; with the upcoming release of version 2.0 we will see some changes in how LMD does business. The first, and to me the biggest, will be optional usage statistics, which will allow users to have LMD report anonymized statistics back to rfxn.com. These statistics will show us which malware hits are found on your servers, which in turn helps prioritize the types of malware threats in the daily processing queue for hashing and review. The statistics will also help create informative profiles on the soon-to-be-released dailythreats.org web site about how maldet is used and what the most prevalent threats in the wild are.
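Since version 2.0 is not out yet, the following is only a rough sketch of what an opt-in, anonymized hit report might look like; the endpoint URL, field names and hashing scheme are placeholders of my own, not the actual implementation.

import hashlib, json, socket, urllib.request

def build_report(hits):
    # hits: signature names detected on this host, e.g. ["{HEX}base64.inject.unclassed.7"]
    host_token = hashlib.sha1(socket.getfqdn().encode()).hexdigest()  # anonymized host identifier
    return {"host": host_token, "hits": hits}

def submit(report, url="https://stats.example.com/submit"):  # placeholder endpoint, not rfxn.com's
    req = urllib.request.Request(
        url,
        data=json.dumps(report).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=10)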

Other additions to LMD 2.0 will include a refined scanner that provides greater speed with large file sets (50k – 1M+ files), the ability to fork scans to the background, a better and more predictable logging format for 3rd-party processing of LMD log data, a redesigned reporting system, full BSD support, the ability to create custom signatures from the LMD command line, expanded cleaner rules, wildcard support for exclude paths, a number of security and bug refinements and, as always, more signatures.

If you have any feature requests for LMD 2.0, go ahead and post them as a comment and I will make sure they get added to the list. Thank you to everyone who continues to support rfxn.com projects through donations, feedback, and by simply using and spreading the word about the projects. I look forward to another year of LMD and to seeing it become the premier malware detection tool for Linux and all Unix-variant OSes.

Signature Updates & Threat Database

It has been a very active month for those who pay attention to the signatures as they are released: you might have noticed a sudden spike about two weeks ago, from roughly 2,500 signatures to the current 4,425 mark. The vast majority of these signatures were added in MD5 format, as a great many are variants of "known" malware and were extracted by processing the last 90 days of historical threat data from clean-mx.de, sorted by unique hashes. I also did some leg work in my processing scripts that allows them to handle base64 and gzip decoding of POST payloads from IPS data, which is generating a marked increase in new malware and known-malware variants. Together, this has added 1,806 MD5 and 31 HEX signatures in the last 45 days, bringing us to the current mark of 4,425 (2,808 MD5 / 1,617 HEX) total signatures.
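As a rough illustration of that decoding step (not the actual processing scripts), a captured POST payload can be base64-decoded and gunzipped before being hashed; the capture format itself is an assumption here.

import base64, binascii, gzip, hashlib

def decode_payload(raw: bytes) -> bytes:
    try:
        raw = base64.b64decode(raw, validate=True)
    except (binascii.Error, ValueError):
        pass  # not base64 encoded, keep as-is
    if raw[:2] == b"\x1f\x8b":  # gzip magic bytes
        try:
            raw = gzip.decompress(raw)
        except OSError:
            pass  # malformed gzip stream, keep the undecoded bytes
    return raw

def md5_signature(raw: bytes) -> str:
    # hash the fully decoded payload so variants collapse to one MD5 signature
    return hashlib.md5(decode_payload(raw)).hexdigest()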

In addition to the above, the daily processing scripts have been rewritten and combined into a single task on the processing server. This brings together what were previously 9 different scripts into a single, streamlined and much more efficient task. The reason things got to the point where 9 different scripts were updating various elements of the back-end processing server is that the LMD project developed very fluidly over the last year: every time I had a new idea or added a new feature, I created a new script to support it. Over time that naturally was not sustainable, and what we have now is exactly that: sustainable.

For those interested, here is the output report generated and sent to my inbox at the end of each daily malware update task:

started daily malware update tasks at 2010-09-13 00:09:35
running daily malware fetch... finished in 710s
running daily ftp malware fetch... finished in 6s
regenerating signatures from daily malware HEX hits... finished in 95s
propagating signature files... finished in 2s
generating sqlfeed data... finished in 88s
running mysql inserts for sqlfeed on praxis... finished in 42s
syncing & updating malware source data (master-urls.dat).... finished in 27s
syncing & updating irc c&c nets... finished in 15s
rebuilding maldetect-current... finished in 3s
pushing maldetect-current and signatures to web... finished in 4s
completed daily malware update tasks at 2010-09-13 00:26:05 (990s)
processed 156 malware url's
retrieved 40 malware files
extracted and hashed 16 new signatures
extracted 59 new irc c&c networks
queued 24 unknown files for review
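For illustration only, the consolidated task boils down to running each step in order and timing it, much like the report above; the step script names below are placeholders, not the real back-end scripts.

import subprocess, time
from datetime import datetime

# Hypothetical step scripts; the real processing scripts are not public.
STEPS = [
    ("running daily malware fetch", ["./fetch_malware.sh"]),
    ("regenerating signatures from daily malware HEX hits", ["./regen_hex_sigs.sh"]),
    ("propagating signature files", ["./propagate_sigs.sh"]),
    ("rebuilding maldetect-current", ["./rebuild_current.sh"]),
    ("pushing maldetect-current and signatures to web", ["./push_web.sh"]),
]

def run_daily():
    start = time.time()
    print(f"started daily malware update tasks at {datetime.now():%Y-%m-%d %H:%M:%S}")
    for label, cmd in STEPS:
        t0 = time.time()
        subprocess.run(cmd, check=True)  # abort the run if any step fails
        print(f"{label}... finished in {int(time.time() - t0)}s")
    print(f"completed daily malware update tasks at {datetime.now():%Y-%m-%d %H:%M:%S} ({int(time.time() - start)}s)")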

An important part of streamlining the daily update tasks was also rewriting some of the basic processing scripts to better log and store information on malware sources; this information includes date, source URL, file MD5, signature name, top-level domain, online state, IP, ASN, netowner and more. All malware is also now run through an IRC extraction script that checks for IRC server details in malware files and adds them to an IRC command & control list with details such as date, source file MD5, source file signature name, IRC server, IRC port, IRC channel, online state, IP, ASN, netowner and more. The "online state" fields in both the malware source and IRC C&C databases are backed by active checks: for the malware source this simply verifies that a URL is still active and/or the domain still resolves, while for the IRC C&C database a bot manually connects to the IRC network and verifies that the network and channels are online and populated. All IRC users, host masks and a sampling period of channel activity are also recorded from each active IRC C&C network. This information is not included in the database at this time, as a lot of it requires sanitizing: many IRC C&C networks don't mask connecting hosts, and the channel activity reveals exceedingly sensitive information about actively vulnerable web sites and servers. This is something I am working on adding, but it is a difficult task, so it will take some time. The malware signatures database has also been populated but requires a little more work, mainly adding metadata to describe each signature in a format longer than the single-word descriptions included in the signature naming scheme.
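As a simple sketch of the malware-source "online state" check described above (the IRC side requires an actual bot, so only the URL check is shown; the timeout and error handling are assumptions):

import socket, urllib.request
from urllib.parse import urlparse

def source_online(url: str) -> bool:
    host = urlparse(url).hostname
    try:
        socket.gethostbyname(host)               # does the domain still resolve?
    except (socket.gaierror, TypeError):
        return False
    try:
        urllib.request.urlopen(url, timeout=10)  # is the URL still serving content?
        return True
    except Exception:
        return False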

Together, the malware signature database, the malware source database and the IRC C&C networks database will tie into a single threat portal, to be released in the next couple of weeks (I hope), allowing seamless correlation between data in all 3 databases. For example, one could query all malware sourced from a specific IP, ASN or netowner, find all the source URLs for a specific malware's MD5 signature, or query the signature database for more information on a specific signature. There are a great many options that will be available for reviewing, cross-referencing and exporting data from the databases.
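For example, against a hypothetical combined schema (table and column names here are my own, not the portal's), a correlation such as "all source URLs and netowners for a specific signature" might look like this:

import sqlite3

conn = sqlite3.connect("threats.db")  # hypothetical combined database file
rows = conn.execute(
    """
    SELECT s.url, s.ip, s.asn, s.netowner
    FROM malware_sources AS s
    JOIN signatures      AS g ON g.md5 = s.file_md5
    WHERE g.sig_name = ?
    """,
    ("base64.inject.unclassed.7",),  # example signature name from the LMD naming scheme
).fetchall()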

These databases are all already completed, active and receiving updates; all that is left for me to do is create the front end, which will find its home at http://www.dailythreats.com. The signature database, as expected, has 4,526 entries, the malware source database has 7,859 entries and the IRC C&C database has 386 entries. There are currently 511 files pending review in the malware queue; 3,592 malware files have been reviewed in the last 45 days, of which 1,806 were unique files, and the 511 files in the review queue represent files that could not be auto-hashed against a known threat or variant threat from HEX pattern matches.

The biggest pitfall of all these changes has been the explosion in the review queue that I must tend to daily. It has started to back up on me as I am in the middle of moving from Michigan to Montreal, but as soon as I am done with my move in a couple of weeks, I plan to get the queue under control and work on some more back-end scripts to help streamline its processing.

Well, that's it for now. Keep an eye out for details to come on the dailythreats.com site; it's going to be exciting 🙂

Tracking & Killing Bot Networks

In a previous blog post I discussed how one of the more enjoyable parts of my day-to-day malware rituals involves tracking and killing command and control bot networks. Recently I have begun automating this process a bit: I have created a series of scripts that extract IRC servers, port numbers and channels from malware as it comes in and then check whether the IRC server is still online; a custom bot then logs into the server, queries the active channels and determines how many zombies are active on the network. If an IRC server is determined to be active with zombies still connected, the server is reported to the abuse address listed in the whois information for the server's IP address.
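The extraction step can be illustrated with a few regular expressions run over a bot script; the patterns below target variable names commonly seen in Perl/PHP bot agents and are assumptions, not the exact ones used by my processing scripts.

import re

SERVER_RE = re.compile(rb'(?:server|servidor|host)\s*=\s*["\']([\w.\-]+)["\']', re.I)
PORT_RE   = re.compile(rb'port\s*=\s*["\']?(\d{2,5})', re.I)
CHAN_RE   = re.compile(rb'(#[\w\-]+)')

def extract_irc(data: bytes):
    # returns (server, port, channels) pulled from a bot agent's source
    server = SERVER_RE.search(data)
    port = PORT_RE.search(data)
    chans = sorted(set(CHAN_RE.findall(data)))
    return (
        server.group(1).decode() if server else None,
        int(port.group(1)) if port else 6667,  # fall back to the default IRC port
        [c.decode() for c in chans],
    )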

The automation of this process is something I have had on my to-do list for a little while, but I finally stopped procrastinating and got it done. The real advantage of having it automated is that I can easily generate a tangible set of information that lets me see how many bot networks are present in the malware I process daily, weekly and monthly, how many of those networks are still active, and more importantly how many of those networks still have active zombies connected. Likewise, as I've discussed previously, I am working on a threat portal, and having the IRC C&C data processing automated will make it easier to put that information on the portal and integrate it into the aggregate threat feed the portal will offer for route/firewall/DNSBL drops.

Here are some statistics on IRC command and control networks as seen in the malware processed by me in the last 30 days:
Total Processed Malware (30d): 607
Total IRC C&C Servers: 251
Total Online IRC C&C Servers (as of 08/17/10): 118
Total Online IRC C&C Servers with Active Zombie Hosts: 30
Total Zombies Observed on Online IRC C&C Servers: 1,679 (55 average per server)

There are some notable observations. Out of the 251 noted IRC C&C servers, only 118 are still online; of those 118, 64 utilize free DNS naming services and/or dynamic DNS services, while the other 54 create C&C channels on established public IRC networks or use the DNS name of compromised hosts running an IRC server. Nearly every one of the 133 now-inactive IRC servers used IP addresses within the host malware script; a small minority used DNS names of compromised hosts.

It goes without saying that using public DNS services or dynamic DNS services gives attackers the flexibility to quickly recover a C&C server and its participating zombies in the event of the host server being shut down. Further, a number of more mature IRC C&C bots will periodically retry the connection when disconnected from the host C&C server, further increasing the chance of the attacker fully recovering the zombie network.

Also, PHP is increasingly becoming a common language of choice for C&C bot agents, though Perl agents are still vastly more popular. The LMD project has currently classified 44 unique C&C bot agents comprising 286 agent scripts/binaries: 14 classes (38 scripts) are PHP based, 21 classes (213 scripts) are Perl based, and 9 classes (35 scripts/binaries) are other languages (C/Ruby/Java).

Currently an average of 6 bot networks are being abuse-reported per day; of those, only about 2-3 per day ever receive any form of follow-up and/or shutdown of the host running the network. That is a rate of less than 50% on average, which is abysmal to say the least. When the threat management portal goes up in the coming weeks, these networks will find themselves at the top of the threat feed and planted squarely on the front page of the portal; we might not be able to shut them down, but we sure can filter them off our networks.

Understanding Signatures

The signature naming scheme for LMD is a little confusing and something I've received more than a few questions about, mostly about what the *.unclassed signatures mean. The naming scheme (to me) is straightforward and breaks down as follows:

{SIG_FORMAT}lang/vector.type.name.ID#

The 'SIG_FORMAT' is either HEX or MD5, reflecting the internal format of the signature; 'lang/vector' is the language or attack vector of the malware; 'type' is a short descriptive field for what the malware does (i.e.: ircbot, mailer, injection etc...); 'name' is a short descriptive name unique to the piece of malware; and 'ID#' is the internal signature ID number.
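As a small illustration (my own sketch, not part of LMD), the scheme can be split mechanically; for a signature such as '{HEX}base64.inject.unclassed.7' it breaks out like so:

import re

SIG_RE = re.compile(
    r'^\{(?P<fmt>HEX|MD5)\}(?P<lang>[^.]+)\.(?P<type>[^.]+)\.(?P<name>[^.]+)\.(?P<id>\d+)$'
)

def parse_sig(sig: str) -> dict:
    m = SIG_RE.match(sig)
    if not m:
        raise ValueError(f"unrecognized signature name: {sig}")
    return m.groupdict()

# parse_sig("{HEX}base64.inject.unclassed.7")
# -> {'fmt': 'HEX', 'lang': 'base64', 'type': 'inject', 'name': 'unclassed', 'id': '7'}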

What some people appear confused about is signatures such as '{HEX}base64.inject.unclassed.7' that use the term "unclassed" in the name field. Essentially, unclassed signatures represent a group of malware that is not necessarily unique from each other but follows the same attack vector, such as base64-encoded scripts; there are hundreds of these scripts, and in encoded form it doesn't really matter what they do, since we are detecting the encoded format rather than the decoded one, so they get lumped together. In other instances, I will throw some malware into an unclassed group when it is very new and I have not yet had time to process it into its own classification. For example, web.malware.unclassed is a dumping ground for a lot of newly submitted malware that I have reviewed and confirmed IS MALWARE but have not yet classified or determined to be a variant of an existing malware classification.

It needs to be understood that the processing of malware is mostly a manual task; though some elements of it are automated, the actual review of each malware file is done by hand to remove the chance of false positives, keeping LMD accurate and reliable. As such, not all malware makes it into a classification group right away; the important part is that malware is reviewed, verified and has signatures generated for it in a timely fashion. I process malware daily from the network-edge IPS system at work, from user-submitted files and from various malware news groups / web sites, and the priority is getting signatures up for in-the-wild threats. The signature name/classification serves informative purposes; yes, it is important, but not as important as the actual verification and signature generation.