Ben Reardon - Member
Chris Horsley - Member
Gerald O'Reilly - Member
David Zielezna - Member
Kayne Naughton - Member
Shaun Vlassis - Chapter Lead
Changes in the structure of your chapter
We have been very lucky to grow the number of full-time members of the team over the last 12 months.
Kayne, Gerald and Chris joined the ranks of the team.
We currently have a few disparate network honeypots deployed; however, we are in the process of redesigning how we collect, share and distribute the information we gather.
Research & Development
Shaun, Chris and Ben have all been involved in the recent GSoC projects, with Chris mentoring a project on Splunk attack graph visualisation, Shaun mentoring a project on network sinkholing, and Ben mentoring a project on webviz.
We are actively reviewing all our existing systems, many of which require a lot of TLC, and updating or rewriting them so that they are easier to maintain and use, and so that we can open them up to more local collaborators, who have historically been difficult to bring on board.
Interactions with the security community
This year we created a new security discussion mailing list for Aussies in the incident response and malware analysis space, as there was previously no real community for people interested in or working in those fields to get together and discuss what they are seeing and doing.
Our very own Ben also ran one of the Forensic Challenges, FC10 - Attack Visualisation!
How to interact with our chapter
We can be contacted via [email protected], as well as via our discussion list at [email protected].
Last year we grew our chapter by three people, and in the coming year we will be looking for more people interested in getting involved more closely with the project.
For the coming year we will focus on creating and fostering a better community for incident responders and malware researchers here in Australia, through collaborative discussion and botnet proliferation monitoring.
All members of the Australian chapter are regularly chatting away on the mailing lists and IRC :)
What is Trigona?
Trigona is a VirtualBox-powered honey-client designed for high throughput with low false-positive and false-negative rates.
It is essentially taking the best of High interaction and Low interaction honey-clients and cobbling them together with a couple of Perl scripts.
The benefit of high-interaction honey-clients is that, since no software is emulated, they catch everything, whereas a low-interaction honey-client will only catch exploits it has specifically catered for. The downside is that a high-interaction honey-client is much slower than a low-interaction one: it requires a full-blown virtual machine for each URL analysed, while a low-interaction client is generally a command-line tool that can pump through a lot of links in a short period of time.
Trigona takes the high throughput of LI honey-clients and the 'catch all' benefit of HI honey-clients and puts them together in one system.
Essentially, it works like this:
1) Load a virtual machine with all the required browser plug-ins etc.
2) Instead of loading one URL in the virtual machine, load (for example) 200 all at once. This network traffic is packet-captured for analysis at a later stage.
3) Revert the image and repeat.
By doing this we can achieve very high throughput while missing nothing, since the pcap analysis is performed 'out of band'.
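The cycle above could be sketched like this in Python (the real tool is a couple of Perl scripts; the VBoxManage snapshot/start/poweroff subcommands are real, but the VM name, snapshot name and capture wiring here are purely illustrative):

```python
import subprocess

BATCH_SIZE = 200  # URLs opened per VM cycle

def batches(urls, size=BATCH_SIZE):
    """Split the visit list into fixed-size chunks, one chunk per VM cycle."""
    for i in range(0, len(urls), size):
        yield urls[i:i + size]

def run_cycle(vm, snapshot, chunk):
    """One cycle: revert to a clean snapshot, open every URL at once, then
    power off. Packet capture of the guest's traffic runs alongside on the
    host (e.g. tcpdump) and is analysed out of band."""
    subprocess.run(["VBoxManage", "snapshot", vm, "restore", snapshot], check=True)
    subprocess.run(["VBoxManage", "startvm", vm, "--type", "headless"], check=True)
    # ... open every URL in `chunk` in the guest browser, wait, then:
    subprocess.run(["VBoxManage", "controlvm", vm, "poweroff"], check=True)
```

The win is that the expensive part (booting and reverting the VM) is amortised over 200 URLs instead of paid per URL.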
While this is very useful it is only part of the solution.
What do we do with this pcap now?
Well, the ultimate aim of a honey-client tool is to find the following:
- Infected/hacked website, e.g. mumndadsbakery.com
- Exploit Kit
- Malware Binary
In the traditional sense this was very simple, as only one URL would be analysed at a time; if a binary was dropped, it was safe to assume that the first URL was infected and that the other content pulled was related to the exploit/binary as well. Case solved.
What about when you have 1..n start links, 1..n intermediary links and 1..n final links?
You can start to see the problem: how do I know which link is related to which?
For this stage the process is rather simple.
Using HTTP::Sessionizer, the pcap analysis component of the tool takes all URLs visited (and other data) out of the packet capture and loads them into a database with the following information:
mysql> desc map;
+------------+---------------+
| Field      | Type          |
+------------+---------------+
| stem       | varchar(1000) |
| url        | varchar(1000) |
| hostname   | varchar(1000) |
| referrer   | varchar(1000) |
| exe_flag   | int(11)       |
| start_flag | int(11)       |
| md5        | varchar(32)   |
+------------+---------------+
As it loads them in, it determines whether each file is of executable type and whether each URL is a start link; the latter is determined from the honey-client's site visit list, which is tied to the pcap. (That list is not necessary for the tool to operate, but it helps with accuracy.)
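As a rough illustration, the table and loading step could be reproduced like this (an SQLite stand-in for the original MySQL backend; the sample row and helper name are mine, not the tool's):

```python
import sqlite3

# SQLite stand-in for the MySQL `map` table described above.
SCHEMA = """
CREATE TABLE map (
    stem       VARCHAR(1000),
    url        VARCHAR(1000),
    hostname   VARCHAR(1000),
    referrer   VARCHAR(1000),
    exe_flag   INT,
    start_flag INT,
    md5        VARCHAR(32)
)"""

def load_url(conn, stem, url, hostname, referrer, exe_flag, start_flag, md5):
    """Insert one observed request, flagging executables and start links."""
    conn.execute("INSERT INTO map VALUES (?, ?, ?, ?, ?, ?, ?)",
                 (stem, url, hostname, referrer, exe_flag, start_flag, md5))

conn = sqlite3.connect(":memory:")
conn.execute(SCHEMA)
# Hypothetical drop: an .exe pulled after a hop from the hacked start site.
load_url(conn, "/dropper", "http://badhost.example/dropper/a.exe",
         "badhost.example", "http://mumndadsbakery.com/", 1, 0,
         "d41d8cd98f00b204e9800998ecf8427e")
```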
Once it has identified executable content and its associated link, it takes that link and does the following:
1) Take the link's referrer and hop back until it reaches a start link or a link with no referrer.
2) If there is no referrer, check whether there are other links on the same hostname; if so, group those into the case as well.
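The two grouping steps might look something like this in Python (a sketch only, assuming the referrer and start-link data have already been pulled out of the map table; function names are mine):

```python
from urllib.parse import urlparse

def walk_back(url, referrer_of, start_urls):
    """Step 1: hop referrer-by-referrer from an executable drop back to the
    start link (or until there is no referrer, or we detect a loop).
    Returns the chain ordered start -> drop."""
    chain, seen = [url], {url}
    while url not in start_urls:
        url = referrer_of.get(url)
        if url is None or url in seen:
            break
        chain.append(url)
        seen.add(url)
    return list(reversed(chain))

def same_host_links(url, all_urls):
    """Step 2 fallback: group other observed links on the same hostname."""
    host = urlparse(url).hostname
    return [u for u in all_urls if urlparse(u).hostname == host]
```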
It will create zip case files such as:
(taken from the Honeynet project challenge)
and also put the executables identified into a separate folder for you to then feed into whatever system you wish to use.
Set and forget :)
There are a number of improvements that I'm currently working on for the packet capture analysis component, but if I waited to release until these were all done, you'd be reading this in a year's time, hehe.
- Addition of regexes to aid in detection of exploit kits, as the tool currently only identifies them if a successful binary is dropped
- Addition of extra detection methods to identify the malicious drop to then start the link hopping.
-- i.e. kit-specific content: Phoenix, Mpack and other cool-named exploit kits, etc.
-- Anti-virus scan
-- Snort like signatures to detect drops
-- Ability to identify, on the fly, XOR'd drops designed to evade such network detection, i.e. exploit downloads where the MZ header is obfuscated
- Self-learning: if an exploit site is identified, it is saved in the database and used to flag malicious chains, infected URLs and drops, instead of relying on finding the executable drop.
i.e. hacked_site --> exploit site --> binary drop
Currently the tool only works from the binary drop and then walks backwards; the next iteration will flag a known exploit site and then walk both ways to find the hacked_site and the drop. :)
any other suggestions?
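For what it's worth, the simplest case of the XOR'd-drop idea, a single-byte key over the MZ magic, can be brute-forced in a few lines (a sketch only; real kits use fancier encodings than a constant single-byte XOR):

```python
def find_xor_key(data):
    """Brute-force single-byte XOR keys against the first two bytes of a
    suspected drop; return the key that yields the 'MZ' PE magic, 0 if the
    data is not obfuscated at all, or None if no key fits."""
    if data[:2] == b"MZ":
        return 0  # plain PE file, nothing to de-obfuscate
    for key in range(1, 256):
        if bytes(b ^ key for b in data[:2]) == b"MZ":
            return key
    return None
```

A real detector would also sanity-check more of the decoded header (e.g. the PE signature at the offset in e_lfanew) to cut down on false hits from only two magic bytes.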
Where can I find the tool?
Recently, on the train to work, I got to playing with VirtualBox, and by the end of the trip I had a very nice new toy that will automagically process malware samples in VirtualBox images, capture their associated network traffic, and package up the results into a neat little zip for each sample that is run.
Why did I call it Minionz? Well, I wanted a cool name, and one of the team members said "it has to have the letter z in there". I figured Minionz was a very appropriate name for a sandnet, since the VMs are effectively doing all the leg work, as a normal minion would. :)
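The per-sample packaging step might look something like this (a Python sketch; the function name and zip layout are my own illustration, not Minionz internals):

```python
import pathlib
import zipfile

def package_sample(sample_name, artefacts, out_dir):
    """Bundle one sample's artefacts (pcap, logs, dropped files) into its
    own zip, one zip per sample run."""
    out = pathlib.Path(out_dir) / f"{sample_name}.zip"
    with zipfile.ZipFile(out, "w", zipfile.ZIP_DEFLATED) as z:
        for path in artefacts:
            # Store each artefact flat, by file name, inside the sample zip.
            z.write(path, arcname=pathlib.Path(path).name)
    return out
```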
Today we are happy to announce the release of an automated spam processing tool.
It extracts all URLs from an email, tries to pick the correct sender of the email, and then links the two together in a database.
Information such as geolocation and ASN is also collected and stored for both the sender and the url.
It has been working for a few weeks now, but I'm sure someone will find something wrong with it; if so, please let us know.
The code can be found in the tools section http://honeynet.org.au/?q=node/10
Today we are very proud to announce that labyrinthdata.net.au has donated a server to the Australian Honeynet Project.
If you are looking for good Australian VPS hosting take a look at these guys!
The latest version of the tracking system has finally been released. The changes since version 1.0 are not extensive.
This version allows us to decide which hostnames we want to track.
Development of the DonkeyPot has been progressing at varying speeds lately.
I've been looking at many different coding examples, from the OCaml code for mldonkey to a lovely little C# app, as I decided that trying to do it all in Perl might not be the best idea.
The first version of the code should be ready for release over the next few months.
Some interesting finds so far: testing has shown that the majority of files carrying malware, as deduced by virustotal.com, pose as anti-virus 'patches'. At last check there were over 750k unique files (by MD4) being shared on the eDonkey network.
From the files I have tested so far, I am already seeing links between existing botnets in circulation and those Shadowserver is tracking. This raises some very interesting questions about the propagation methods of these botnets.
Another interesting point is that, for the majority of the files I have collected and tested, there is very minimal coverage from the leading anti-virus vendors.
Today we are happy to announce the public release of the first version of our Fast-Flux tracking tool. It can be found in the tools section of the website http://honeynet.org.au/?q=node/10
Over the last couple of weeks, what started out as a simple Perl script to map the IPs of the fast-flux domain ibank-halifax.com has turned into a complete system, with a database backend for tracking changes to FF domains.
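The core polling pass of such a tracker can be sketched in a few lines (a Python stand-in for the original Perl; function names are illustrative):

```python
import socket

def resolve_ips(domain):
    """Return the current set of IPv4 addresses for a domain
    (empty set if resolution fails)."""
    try:
        return {info[4][0] for info in
                socket.getaddrinfo(domain, None, socket.AF_INET)}
    except socket.gaierror:
        return set()

def tracking_pass(current, seen):
    """One polling pass: record and return the IPs not seen on any earlier
    pass. Fast-flux domains churn through fresh IPs constantly, so each
    pass typically yields new rows for the database."""
    fresh = current - seen
    seen |= current
    return fresh
```

Run on a cron-style loop, each pass's fresh IPs become new rows in the backend, giving the history of the flux network over time.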
I hope to release the implementation soon, so that other people can learn from and build on what I've done.
Map of Fast flux hosts:
Our first sponsor!!
We are very excited to announce that 1and1.com http://1and1.com have donated a server to the project !
This is very generous of them, and is something we could not have afforded out of our own pockets.
Thanks very much 1and1 !