Friday, December 18, 2009

20 years in av

i knew time was coming, i even thought about marking the occasion here, but my attention was elsewhere at the time so there weren't any posts on the subject.

then eddy willems posted about his 20th anniversary in anti-malware and i thought maybe i should post about this after all - so it entered my to-do list and languished there for a while.

and then today david harley posted about his upcoming 20th anniversary in anti-malware so i figure it's time to pop that task off the to-do stack and actually write something.

you see, i also had my 20th year in the anti-malware field this year - though not in the professional sense that eddy and david did. i don't recall the exact date but i know it was in late november of 1989 that i started down the path i'm on today (i gave a basic run down of how it happened in my about me post 3 years ago).

it's interesting to me to discover that some of the other names i know in this field also got started the same year. it didn't really seem like '89 was all that significant a year or anything, but i guess that's about when awareness of the malware problem (or rather the virus problem at the time, since that was the principal form of malware being created) was reaching the critical mass necessary to entrench itself firmly in the general public's consciousness - first as an obscure curiosity, then as an increasingly real and oftentimes personal annoyance as people had their own run-ins with the problem. as such i'm sure there's quite a few more who got their start that year.

at any rate, happy anniversary eddy and david. i hope it's been as stimulating for you as it has been for me.

Saturday, December 12, 2009

why mac fanatics still believe they're virus free

(another post from the draft pile)

i stumbled across this article about why macs are still virus free and it occurred to me that there were a number of false premises that deserved highlighting to illustrate why mac users still think their beloved platform is so safe.

  • the first thing i noticed was an ill-conceived notion of what a virus is (eg. "When I say virus I'm referring to a program which self-propagates and self-installs either by exploiting a back door in the operating system or another legitimate application"). by this definition most PC viruses (and i'm not using virus as a catch-all umbrella term here) are not actually viruses.
  • next thing i noticed was the comparison of apples to fruit (eg. "So why don't Macs get viruses while Windows PC's seem to be facing a constant tsunami of malware, spyware, worms, trojans and botnets?"). compare mac viruses to pc viruses please, not mac viruses to pc viruses, worms, spyware, trojans, botnets, etc. either that or compare the gamut of mac malware to the gamut of pc malware.
  • next on the list of wrong-headed thinking i picked up from that post was thinking malware authors are still just attention seekers (eg. "There are a lot of theories regarding install base and attention-seeking virus writers") when it has been demonstrated over and over again for the past several years that they're financially motivated now - the current trend is to follow the money.
  • another bit of nonsense i noticed (which in fairness is bandied around by a lot of otherwise intelligent people) is thinking that going after the biggest group limits them to going after just one group (eg. "wanting to target the biggest market") when it has been demonstrated that professional malware gangs are targeting both platforms at the same time (see zlob gang).
  • yet another wrong thought in the article was thinking that unix makes the difference (eg. "The real answer is UNIX") when in fact the first academic treatment of the virus problem (back when the term 'computer virus' was originally coined) had viruses successfully replicating across a user population in a professionally administered unix environment without cooperation from the admin.
  • the most damning, however, is thinking in yesterday's terms. the very fact that they're still focusing on viruses rather than malware in general shows just how outdated the thinking really is. most of the malware currently attacking pc's these days is NOT viral (either by normal pc definitions, incorrect mac definitions, or formal definitions). furthermore viral malware isn't really the biggest malware problem these days. huge numbers of non-viral malware are the biggest problem facing pc's and the malware gangs have been targeting both pc's and macs for years now.

mac users have largely ignored the malware problem, which is probably why what little they know of the problem is generally either wrong or out of date. the malware problem isn't ignoring them, however. they have an opportunity to get ahead of the problem, but if they keep living in the past that opportunity will be squandered.

Sunday, December 06, 2009

sneakemail is no longer free

well, y'know what they say, all good things must come to an end and the free ride at sneakemail appears to be one of those things. as of sometime earlier this month sneakemail moved to a paid service and existing accounts were switched over to the one month trial setup.

if you're using sneakemail then this is probably something you want to know about (i found out quite by accident) because when the trial is over your emails won't get forwarded to your real email address anymore.

i've been using sneakemail for years now, and directing others their way. it's a great service and it's helped me keep spam in check, so i don't want to say that their service isn't worth the $2 a month fee, but recurring charges are the bane of my existence so i'm not sure what i'm going to do. this is complicated by the fact that i have so many addresses with them (most of which get no traffic, but still). switching to another service would be a pain due to a several-years-long habit of using sneakemail as well as all the existing addresses i'd have to switch over. plus there's no guarantee that the next one will turn out any better in the long run. paying the fee would also be a pain, and an ongoing one at that.

but enough of my griping - you're now forewarned, go do something about your account if you're a sneakemail user. you have less than 30 days.

malware classification fail

here's one from the drafts pile, hopefully it's not too stale

i'm wondering what the anti-malware world is coming to when the leading vendor classifies something as a trojan even though it clearly discloses what damage it does.

by this logic, every copy of every operating system also ships with a trojan horse program, either in the form of the delete command or the format command.

one of the basic requirements of a trojan is that it tricks the user into executing it - the original trojan horse wouldn't have gotten very far if there was a warning sign on the outside that said it contained enemy soldiers that would sack the city when night fell. so too would suspected malware not get very far if it plainly disclosed what it does.

this game is at worst a potentially unwanted program - in other words, grayware. we can't just go around calling every bad program (or even just every bad non-viral program) a trojan any more than we can go around calling all malware viruses. not using the proper terminology is a great way to confuse everyone and confusion is something we don't want to sow, right?!?
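the distinctions argued above can be sketched in code. this is a toy illustration of the taxonomy, not any vendor's actual classification logic - the property names and labels are invented for the example:

```python
# toy malware taxonomy: the label hinges on *how* a program behaves
# and presents itself, not merely on whether it's unwanted.
# property names and labels are invented for illustration.

def classify(self_replicating: bool, deceptive: bool, harmful: bool) -> str:
    if self_replicating:
        return "virus"         # viruses copy themselves into other objects
    if harmful and deceptive:
        return "trojan"        # a trojan *misrepresents* what it does
    if harmful:
        return "grayware/PUP"  # damage is plainly disclosed, so no trickery
    return "benign"

# the game in question disclosed its damage, so it isn't deceptive:
print(classify(self_replicating=False, deceptive=False, harmful=True))  # -> grayware/PUP
```

the point of the sketch is simply that the deception test is what separates a trojan from openly destructive (or merely unwanted) software - drop that test and the delete command becomes a trojan too.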

Friday, November 27, 2009

av vendors are not like drug pushers

one of the erroneous ideas i sometimes come across is that av vendors are a little like drug pushers - that they want to keep you the user addicted or otherwise dependent on signature updates because charging you for regular signature updates is the only way they can make money.

this notion is complete, uninformed bullshit.

the first problem with this idea is the money aspect - if you haven't noticed, the major av vendors come out with a new version of their products (not just new signature updates) every year, not unlike how microsoft comes out with a new version of ms office every few years. you have to pay microsoft to upgrade your ms office installation so it shouldn't take a rocket scientist to realize that av vendors make money the same way. they also make money from those who just renew at the end of the year instead of buying the new version because the signature and engine updates cost money to develop.

now you might think that just plays into a more fundamental issue, that they're purposefully adhering to a technology that requires updates/upgrades so that you need to pay each year, but that's also nonsense. both the threat landscape and the operating environment itself are constantly changing; there's no protective technology that won't require updating to accommodate that fact. furthermore, there are always improvements that can be made to the way a security product (any security product) does its job - the only way to get those improvements out to people is in the form of updates/upgrades, the only way to pay for the research and development behind those improvements is to charge somebody money, and it's only fair that the people they charge for the improvements are the people who benefit from them.

still think they're intentionally dragging their feet with regards to non-signature-based technologies for some reason? fine, let's look at our old friend thunderbyte anti-virus. thunderbyte was an anti-virus suite back in the early 90's before av suites were even heard of. it had the signature based scanner, sure, but it also had the most transparent heuristic engine (by which i mean it told you what properties a file had that made it suspicious) i'd ever seen (then or since), it had rudimentary application whitelisting, it had behaviour blocking, it had integrity-based generic detection and cleaning. thunderbyte even marketed av hardware.

the folks at thunderbyte were pioneers who in a very real sense built a better mousetrap and believe it or not the world did not beat a path to their door. the product was ultimately a failure in the market (their technology was bought by norman data defense which, with all due respect to the folks at norman, is a much more obscure company), not because it wasn't a superior product (it was), nor because it was too much of a niche product (it was readily available in computer stores where i live despite coming from a different continent and i imagine it was available in stores elsewhere as well), but because the market wasn't ready for it. just because you build it doesn't mean they will come - it might work like that in the movies but not in real life.

it would be unreasonable to expect other vendors to waste their money developing technology that the market wasn't already clamouring for - the reason vendors have been slow to develop these alternative technologies is because the market for those technologies has been slow to develop. there weren't enough customers demanding the technology for its development to make good business sense.
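to illustrate what i mean by a transparent heuristic engine, here's a toy sketch in the thunderbyte spirit - report not just a verdict but the specific suspicious properties behind it. the trait names and weights are invented for illustration; they are not thunderbyte's actual heuristics:

```python
# a "transparent" heuristic scanner sketch: instead of a bare
# suspicious/clean verdict, it reports *which* observed properties
# contributed to the suspicion. traits and weights are invented.

SUSPICIOUS_TRAITS = {
    "writes_to_executables": ("modifies other programs", 4),
    "hooks_interrupts":      ("intercepts system interrupts", 3),
    "self_decrypting":       ("contains self-decrypting code", 3),
    "no_error_checking":     ("ignores i/o errors", 1),
}

def scan(traits, threshold=5):
    """score the observed traits and return (verdict, reasons)."""
    hits = [(desc, weight)
            for trait, (desc, weight) in SUSPICIOUS_TRAITS.items()
            if trait in traits]
    score = sum(weight for _, weight in hits)
    verdict = "suspicious" if score >= threshold else "probably clean"
    return verdict, [desc for desc, _ in hits]

verdict, reasons = scan({"writes_to_executables", "self_decrypting"})
```

the design choice worth noting is the second return value - surfacing the reasons is what lets a user judge the verdict for themselves instead of taking the scanner's word on faith.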

Tuesday, November 24, 2009

why are ethics so undervalued?

why are ethics so undervalued? i honestly don't know the answer to that question but i'd like to explore the topic and explain what i mean.

first i'd like to dispel any fears that i'm about to go on at length about people not understanding the difference between right and wrong - i think most people do understand the difference. that said, i don't think most people appreciate the difference - which is to say i don't think it holds much meaning for people, i don't think it's important to them.

i'll give you an example. not too long ago anton chuvakin posted an article on FUD - specifically one that is, if not an outright endorsement of FUD, at the very least an argument that sometimes it's a good thing. i'm not going to pick too much on the notion of endorsing the use of manipulation in the workplace, what interests me in this discussion was something he wrote in response to a blog post criticizing his stance:
personally, I think that “trumping with ethics” is a low card in intellectual arguments! IMHO it is one step above name calling
i don't think there can be any question that this statement represents a remarkably low valuation of the topic of right and wrong.

by way of contrast, i would place ethical right/wrong one step below technical right/wrong - and those of you who know me know how highly i value technical accuracy (hint: i make enemies simply by correcting people).

so where does such a huge difference in values come from? and what does it mean for the security community that anton is not only not an outlier but in all likelihood far closer to the norm than i am? have we become an "ends justify the means" sort of society? is security as a goal something we need to promote at all costs?

i suppose i need to better understand why it means as much as it does to me, so i guess i've got some soul searching ahead of me, but nowhere in that search do i expect to find why it's so much easier for others to put aside. i don't get many comments on my posts (since normally i know the answer to the question i'm asking) but in this case i'm hoping to hear what others think so please feel free to comment.

some new snake oil from kaspersky

i found this out thanks to a thread at wilders - apparently kaspersky is taking a page out of the mcafee snake oil playbook. mcafee has total protection and now kaspersky has total security.

i've been over this time and time again - this kind of branding is snake oil. the obvious implication that the average person would draw is that they simply have to use kaspersky total security and then they can be totally secure. that's a false sense of security and the folks at kaspersky know it.

obviously someone cares more about market share and getting to make commercials with jackie chan than about intellectual honesty.

oh crap - looks like bitdefender did the same thing.

being a whitehat means taking sides

you wouldn't think this needs to be said, but apparently it does - being a whitehat means taking sides. more than that, it means taking the side aligned (more or less) with the general public's interests - doing things for their direct or indirect benefit.

and so it is that i always seem to find myself surprised by people who call themselves whitehats but who sacrifice the public's interests for their own agendas. those people are just lying - to others and perhaps even to themselves - about how good of a 'good guy' they really are. these are greyhats at best or, perhaps more likely, blackhats.

one such case that came up recently was that of peter kleissner (another post on the subject here), an ex-employee of the av vendor ikarus software who released proof of concept attack code and then, after being ousted from his position within the av industry, came up with a service to help malware authors evade the av industry.

i suspect mr. kleissner doesn't actually think of himself as a whitehat anymore, even though he would have generally been considered one at the time his descent started. the thing that stands out most to me, however, and the thing i think needs underlining is the following quote:
I won't make a difference between black hats and AV companies. To me it's not good or bad, it's just technology.
which seems to suggest he doesn't care to draw a distinction between good and bad. there's a word for that, boys and girls, and that word is amoral. while it is true that he is still quite young, he is 18 and he was part of the av industry for over a year. i'm curious how someone at such an impressionable age could be part of the av industry and still manage to avoid having his moral compass align with that industry and community.

i'm still here

i know it's been a while - i'm still alive, just preoccupied with other things. i'm going to try to clear out some of the backlog of things i intended to write about. expect some old subjects for the next little while.

Wednesday, October 14, 2009

sector's wall of shame was dead-on...

... and you should all be ashamed of yourselves for being caught on it.

for those missing the background, last week's sector security conference had this thing called the wall of shame. it was information gathered by sniffing the network. a lot of people thought it was gathered by sniffing the wireless network but it was actually gathered by sniffing the wired network. they got all in a huff because they thought by using the secure wireless option they'd actually be secure.

are you face palming yet? yeah, securing your wireless connection to a network doesn't secure your use of that network. this is a network none of those people controlled - it's about as secure as a public access terminal in a cybercafe and still they thought it was safe? these are security pros no less, at a security conference.

this is pretty unbelievable to me, that security pros can't keep their own shit secure at a security conference. no wonder security appears to be so hard and we have so many breaches - you folks aren't paranoid enough! you absolutely belong on a wall of shame if you thought you could use some strange networking service and just naturally be secure. use an encrypted tunnel to a proxy on a network you control for crying out loud, or better still just don't use the network at all.

i didn't even bring a laptop (or any electronic device except for a cheap mp3 player) and i managed to enjoy the conference without incident. i could say the reason i didn't bring any connected devices was because i've heard of shenanigans like this at security conferences in the past (as should you all have), but the truth is i just like to travel light.

it both scares and saddens me, though, to think that some of my data might actually rest in the hands of some of these people. frankly i think we need a version of the darwin awards for security and you folks on the wall of shame are all contenders. i can't decide, however, whether it should be called the shannon awards or the kerckhoffs awards.

finally, while i realize there are legitimate concerns about the legality of how the wall of shame was implemented, i would also argue that if you think the law is going to solve your network security problems then you might be a security idiot. the law is a deterrent, but as preventative controls go it's not particularly reliable.

Thursday, October 08, 2009

what is credentialed malware?

credentialed malware is essentially (and perhaps more aptly described as) multi-user malware. not multi-victim malware, mind you, but multi-attacker - it is designed to be used by multiple attackers with differing levels of access to the malware's collection of functionality.

credentialed malware really only makes sense in the context of a criminal organization where different members of that organization have different roles and different levels of trust.

it also only makes sense (from a tactical perspective) in cases where attackers would need to physically access the compromised machine(s) (eg. a public kiosk) in order to pull off a successful attack. if the machine could be accessed remotely or if the machine could send data out to remote destinations then there would be no need to employ multiple human agents to mask the maneuvers required to make the attack work.
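the access model can be sketched generically. this is just ordinary role-based access control with invented role and capability names - nothing here is drawn from a real sample:

```python
# a generic sketch of the multi-attacker access model: one tool,
# several operators, each credential unlocking a different subset of
# functions. roles and capabilities are invented for illustration.

ROLE_CAPABILITIES = {
    "collector":  {"read_captured_data"},
    "maintainer": {"read_captured_data", "update_components"},
    "operator":   {"read_captured_data", "update_components",
                   "remove_all_traces"},
}

def authorize(role: str, action: str) -> bool:
    """a presented credential maps to a role; the role gates the action."""
    return action in ROLE_CAPABILITIES.get(role, set())

# a low-trust member can collect data but not cover tracks:
assert authorize("collector", "read_captured_data")
assert not authorize("collector", "remove_all_traces")
```

the point being that the same installation behaves differently depending on which member of the organization is driving it - exactly the multi-user flavour described above.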


(thanks go to nicholas percoco and jibran ilyas for introducing me to this concept)

Wednesday, October 07, 2009

my sector '09 experience

last year i was lucky enough to get my employers to send me to the sector conference (the second one ever) and this year that luck continued. just as i did last year, here is a description of my experiences at sector '09.

first a note, perhaps a reminder to myself, who knows - but if you're going to attend a conference that logically requires you to get out of bed at 6:30am in order to do what you need to do in the morning and make it there, you might want to go to bed earlier than 1:30am. people don't want to see you yawning during their talks, or when they're talking to you directly in the halls (or whatever). it makes them think they're boring you, even if they aren't.

the conference started off with a great keynote by chris hoff about the cloud - check that, about cloud computing, because there is no "the cloud" according to chris; though the fact that it is clearly illustrated on many network architecture diagrams (representing everything else) seems to contradict him. however - and the fact that this became clear to me as a result of his keynote is one of the things that made it great - that rudimentary abstraction on old-school network architecture diagrams has little to do with the discussion of cloud computing. now i wish i'd seen his "4 horsemen of the virtualization apocalypse" talk last year.

next up was the first session of the day and this year, like last year, i spent it in kevvie fowler's talk - this year it was about catching sql injection by examining the sql cache. again, like last year, my decision to attend this talk was based on the perception that doing so would allow me to bring value back to my employers (who paid for my admission) and kevvie didn't disappoint.

following that was the lunch keynote given by andrew nash of paypal, talking about consumer identity. there didn't seem to be a lot of information there that i could use directly, either at work or at home, but some of his ideas/opinions seemed spot on. one of the concepts i don't like, however, (and i believe i've posted my complaints before) is something that i now know is called federated identity.

after that i attended roy firestein's talk about crimeware and web exploitation kits. aside from the fact that roy is one of those people who says anti-virus is useless (there seems to be one in every crowd, but if the sentiment were true then one has to wonder why malware writers continue to waste their time, energy, and money on developing innovative defenses from anti-virus) the talk was fairly interesting. one thing that struck me though (before the av is useless comment) was that roy (and others, when i sit down and think about it) seems to dwell on what seem to me to be subtle distinctions between similar pieces of malware. i'm not sure why but those distinctions have started being less interesting to me these days. not that that stopped the talk from being interesting, mind you, that was just a thought that popped into my head while listening. i think i'd have more difficulty fleshing out a talk due to this mindset, were i to ever be in the position of trying to give one.

for the third session of the day i had decided to attend chris boyd's talk about security and gaming consoles. despite the fact that i don't own a gaming console myself (my gaming console experience is limited to a pong system, the colecovision, and the intellivision systems) and there isn't one at work, there were 2 reasons i wanted to attend this talk. the first was that chris is someone i've known online for a while now, and the second is that while this specific attack vector is outside my area of familiarity my suspicion is that the significance of this vector will increase in the future. the talk was quite interesting - some things were familiar, some i've seen analogs for in social gaming, others were new. the apparent cross-pollination of attack strategies is probably the most interesting thing to me because cross-pollination is not a unidirectional process and so i expect that some of the attack strategies that have been more or less peculiar to consoles so far will find their way out of the (thinly) walled garden of the console world.

as an aside, i also planned on introducing myself to chris after his talk but he had to go and recognize me beforehand. how, i don't know, since there are few photos of me online, fewer still that are current, and then of course there was my clark kent disguise (glasses, when i normally wear contact lenses). clearly, superman i ain't - but there's certainly nothing wrong with putting a face to a familiar name so i'm not complaining.

the last session i attended the first day was robert hansen's talk on information warfare and the future. as the talk was very much about the future, and as i don't actually put much stock in predictions i'll take the stance i always take in this context and wait and see. some of the descriptions of upcoming capabilities were quite provocative, however. the talk let out about 35 minutes early, so it was probably the shortest i saw while there.

letting out early at the end of the day can be a mixed blessing - for those who just wanted to go home they could get an early start, but i wanted to go to the reception at joe badali's which wasn't supposed to start until the last session was scheduled to finish so i tried to find something to do with the spare 35 minutes. that would have been easier if the vendors hadn't mostly already packed up for the day - it would have been the perfect opportunity for me to visit the booths since there was actual time (something that's harder to find during the day). eventually i just decided to go to the reception early (as apparently a number of others in the same boat already had). i had a good time there, talked to a few people, and got a few business cards, but unfortunately when i left the office on monday i had forgotten about sector so i didn't grab a handful of cards and thus had nothing to give in return. i also found out that apparently my day job is more unusual and interesting to other people than i ever realized - who knew?

after the reception was the speaker's dinner which i'm afraid i had to miss due to never quite figuring out where i was supposed to buy the $65 ticket, and a tweetup following that which i also missed since i doubted i could find something to do for the 2 1/2 hours between the end of the reception and start of the tweetup. apparently this worked out for the best as i was able to avoid seeing chris hoff give brian bourne (one of the organizers) a lap dance (or man-dance as i think i saw it called). yes, you read that right. the stills posted to twitter were bad enough, i can only imagine how scarred the people who saw it live must be.

the second day i attended (technically the 3rd day of sector, but i don't attend the first day because it's just training and their courses never seem to have enough relevance to me to justify the cost) started with a surprise. nicholas percoco and jibran ilyas' talk entitled "Malware Freakshow" was excellent. it did something that is exceptionally rare these days - it introduced me to a new malware classification - and unlike a lot of the more recent 'new' malware classifications i've heard, this one actually sounded like a justifiable classification rather than a mashup of existing capabilities in a new package. credentialed malware, or malware designed to be used by multiple people with differing roles and privileges within a criminal organization, is very much a sign of the times - computer related criminal enterprises have progressed to such a degree that malware actually comes in a multi-user flavour now and different users get different capabilities. that was quite neat and that alone would have made this talk my favourite, but there was more: all the real-life examples being used were the sorts of organizations that i could envision being customers of the company i work for (in fact it wouldn't surprise me if some of them were customers) - it was like worlds colliding (there's usually not much overlap between my day job and what i blog about) and i can't wait to share some of the stories with the guys at work tomorrow - especially since a procedural control that our product facilitates potentially could have thwarted the credentialed malware example.

following that talk i attended jerry mangiarelli's talk on sql injection - yes, a second talk on sql injection. again this is a relevance to the day-job sort of deal but it was good to hear some more about it, about the scale of the problem and that sort of thing. of course, considering how prevalent sql injection is now it actually shouldn't be a surprise that there would be multiple talks on it or that someone would attend both.

then we had the lunch keynote for that day, which was given by adam laurie (aka major malfunction). it was quite a fun presentation as, just like adam, i like to break things too (especially at work, though i don't get to do it as much as i used to). he talked about breaking a number of things (like breaking into a state-of-the-art hotel room safe with a pair of pliers and a screwdriver), and he also talked a great deal about biometric passports. i didn't care that much for his treatment of biometrics, but having worked in the field (in an integration capacity) my views and populist views aren't likely to match up.

after lunch i attended the panel discussion with tyler reguly, mike zusman, jay graver, and robert hansen (yes, robert hansen again - that wasn't in the programme). the trouble with ssl is something i've been hearing about for a while and i wanted to know what all the hubbub was about, and the panel did a pretty good job of raising my awareness of a number of issues (which was the goal they stated at multiple points throughout the discussion). one of the points i think was a red herring, however. the complaint about changes to the user experience over different versions of the browser is predicated on the idea that the ssl indicators are useful to ordinary people (since us technical folks are better able to adapt to such things). as has been covered in the past, however, at a fundamental level we just aren't wired to notice when something like a little lock icon is missing. that isn't a failure of ssl, it's a failure of the very concept of a safe-site indicator.

for the last talk of the day i chose to sit in on nick owen's discussion on approaching secure online banking. he's someone whom i recalled having a brief discussion with about authentication in the comments here at one point and i was interested to hear what he had to say. i was impressed to see that he wasn't just saying X solves our problems, that he'd actually identified the different countermeasures appropriate to the different compromise techniques, etc. the banking industry specific stuff, i must admit, was way way over my head, however.

then things wound down and folks made their way to the keynote area for the final wrap-up. i said a brief hello to chris hoff, which seems to be a pattern now (note to self for next year: when it comes to con-tag, i'm it again), and introduced myself to tyler reguly briefly just as we were all getting ready to leave.

but anyways, it was great, i learned lots, met some great people, and had fun. hopefully i have the opportunity to do it again next year.

Sunday, October 04, 2009

mcafee and malware creation

if you hadn't already heard, mcafee plans to teach a class on "malware experience" at a 4 day security conference they're holding this coming week. there were only a couple of reactions to it that i saw, notably david harley's post at threatblog and michael st. neitzel's post on the sunbelt blog. the sunbelt post in particular drew the attention of mcafee's dave marcus who clarified exactly what was going to be going on - to the extent that the controversy around the promise of showing attendees how to create new malware seems to have died a quiet death.

i could have weighed in when i first read about this but the wheels of change had already started to turn and i wanted to see where things went before i said anything. the end result, however, seems to be that mcafee has placated people's concerns with hollow promises: instead of teaching people how to make malware from scratch, they'll be using an existing toolkit to create the malware. the implication is that since this toolkit produces malware that is already detectable (at least as far as mcafee's product goes) they aren't really contributing to the malware problem. if you're detecting the distinct aroma of a barnyard right now, you're not alone.

there are a couple of problems here so let's go through them one at a time. the first is the simple fact that mcafee is in the anti-malware business. i've said this before and i'll say it again - if you're anti-X you shouldn't go around making X's and you sure as hell shouldn't encourage others to do so. the company's namesake reputedly got into trouble with the rest of the industry by offering such encouragement in the form of financial incentives (paying for new viruses). now in this new case it's all going to be done inside a closed environment to prevent undesirable consequences so there should be no problems, right?

wrong. the work in the classroom will take place in a closed environment, but i have no doubt that some of the attendees will subsequently play the home version of the game, running malware toolkits on their own and creating malware in less secured environments (you can't really believe that they'll learn everything they need to handle malware safely in those 4 hours the class will run). a class like this encourages precisely this behaviour. it makes it seem ok for less experienced people to handle malware, and to that end even people who never attended the class will also play the home game if such behaviour is endorsed.

think that sounds far-fetched? it isn't, there are already well intentioned but ultimately unqualified people playing with malware and inadvertently contributing to the malware problem. it's been going on for years. sarah gordon covered this in her paper "The Generic Virus Writer II". that's a pill that the technologically inclined don't want to swallow, they think they understand malware well enough to prevent unintended consequences, but the reality is that most people lack the wisdom to appreciate the extent of their own ignorance.

finally, given the probable result of people playing the home game with the same malware toolkit used in the class, should they contribute to the malware problem they will do so in a way that benefits mcafee because their product already detects all the output of the toolkit. they will be breeding demand for their product in an absolutely unethical way - by teaching people just enough to cause problems that their product can fix (others may as well, but it's impossible to know at this point).

mcafee is behaving irresponsibly and unethically, and i'm struck by how things seem to have gone full circle with them. while others seem to have let them off the hook because they're using a toolkit instead of teaching how to create malware from scratch, as far as i'm concerned the only difference is the sophistication of the malware creators they are going to produce. mcafee will be teaching a new breed of script kiddie and tarnishing the industry's reputation once again. congratulations on being part of the problem, mcafee folks.

Tuesday, September 01, 2009


ah, camping... there's nothing quite like getting away from it all to put things in perspective. not that i sat around the campsite the whole time thinking about malware (gosh that would have been a lame camping trip if i did), but there were times when there was nothing but me, my thoughts, and the patter of rain pouring on my tent so those thoughts of mine wandered around a bit and i realized that some people (especially the people i interact with on blogs or on twitter) may not always get where i'm coming from with this blog.

i've said on more than one occasion that i am not an expert - fair enough. it also happens that i am not a security professional. i'm not the guy who protects you from yourself and everything else while you're at work (thank goodness i don't have to put up with some of the crazy shit other people pull). i'm not a security market analyst (again, thank goodness, i can't stand marketing - it's mostly just a pack of lies). i'm not a malware analyst either (who knows, i might have been good at it if it were a skill i ever bothered to foster, but i didn't).

i don't spend my time looking for bad stuff on-line and marveling over how easy it is for the average person to stumble across it. quite the opposite in fact, i'm far more interested in how easy it is (and in my experience it is fairly easy) to avoid bad stuff online. i play games, watch videos, read my rss feeds, etc. rather than tracking bad guys or pulling apart attacks. i do what normal users do because, more than anything else, i am a user.

and that is perhaps the main reason i believe in users more than many security folks seem to - because i'm one of them. i'm what other users could be with the proper motivation. the proper motivation isn't even an unreasonable expectation - it's merely the desire to protect oneself (a fairly natural thing) tempered with the understanding that the problem will always be there. it has nothing to do with some perverse interest in the dark arts, or some intense curiosity about how various threats work (otherwise i probably would have tried my hand at malware analysis). my primary interest is in protecting myself and long ago (not long after i started learning how to do that) i realized that other people would be better off if they more or less did things the way i did them. after all, i've never, in all my years, been the unwitting victim of malware.

now you could say that's because i'm a programmer (though that certainly doesn't seem to have helped other programmers and there really isn't any programming involved in avoiding threats) or because i'm a computer scientist (ditto for the comp.sci.'s). maybe it's because i have specialized knowledge (i didn't always have it) or maybe i've just been exceptionally lucky for the past 20 years or so (if you think that then you obviously don't know me very well). as the experts will tell you the malware landscape has evolved over the years - and the fact of the matter is, so have i, and so can others.

i'm not a fanboy or a zealot, i'm a security user - i use security, its concepts, its techniques, its tools, etc. to protect myself. i use what works in the process of using the computer to do things like work or play. i operate in a different way than the average user but my use of security does not detract from my ability to get a job done or enjoy online entertainment.

and so to all those people who say av is dead because it doesn't live up to the promise of marketing's lies, or those who say security needs to compromise because users don't want to change the way they operate, or anyone else who thinks we need to tear down progress we've already made (as opposed to building on it) because it's not good enough - i say get over it you maladaptive dinosaurs.

Monday, August 10, 2009

polypack isn't quite as bad as i had thought

just a quick update / mea culpa.

although i stand by the general sentiment expressed in my previous post about research not always being victimless, i've finally gotten a chance to look at the specific example of the polypack service (i was unable to before because the site was down and i had to go by what was written about it rather than what was actually on the site).

i don't know if this is a change from how things were previously, but the polypack service is currently not open to the public. that's great news. although it's still possible that some among the select few who do gain access will be untrustworthy, at least it's not a free-for-all; the people behind it did put some thought into the potential consequences - something that's all too rare these days.

it would still be better if they weren't creating new malware at all (why not pack the eicar standard anti-malware test file instead?), but i felt obliged to at least give them credit for not being completely naive about openness.

Saturday, August 08, 2009

research isn't always victimless

i read david maynor's negative reaction to an article by roel schouwenberg on the ethics of a particular instance of security research dubbed 'polypack' and, well, i'll be honest - my initial reaction was that someone needed to whack maynor upside the head with a clue-stick.

i didn't run right out and blog about it, mind you. as frustrating as this pervasive lack of understanding is for me (maynor's opinions echo those of other security pundits i've heard countless times before), standing up on a soapbox and declaring 'this is wrong' has not been a particularly persuasive course of action.

this has been particularly frustrating because these issues seem so clear-cut to me, and i've been unable to understand why other people can't see them as clearly as i do (or as roel seems to, for that matter). to me there is clearly a significant ethical dimension to things such as online services that turn known malware into unknown malware. that said, i'm not the kind of person to shy away from re-examining my own basic assumptions, and one of the biggest assumptions i make about other people is that they are, more or less, just like me - they know the same things i know, they think about the same things i think about, they have an awareness of and appreciation for the same things i do. i know, i know, i assume way too much - while it's a nice theory to think that we are more similar than different, there are obviously more differences than i tend to account for. it's difficult to account for those differences since i only really know how i work, not how others do - maybe if i could read minds it would be different.

so, throwing out that assumption i recalled a comment david harley left on a post i made about fred cohen not too long ago. he said he suspected fred didn't really spend all that much time thinking about viruses anymore. it stands to reason that most people, even most security folks (so long as they're outside of the anti-malware subdomain of security) don't actually spend all that much time thinking about malware issues. specifically, i suspect that they don't really spend that much time thinking about the potential consequences of efforts like the polypack project, or the race2zero, or any number of other things i could name. why should they? the consequences are so far removed from them in their ivory research towers (not to mention their lack of focus on the very domain where those consequences would manifest themselves) that they're unlikely to learn of any consequences should they become real.

that's not always the way it goes, of course. look at bootroot, created by derek soeder and ryan permeh of eEye digital security. that was the basis of the mebroot mbr malware that became so well known and prevalent in the wild. the consequences of their actions are really quite plain to see - they armed the bad guys and the bad guys pulled the trigger. they are at least partially responsible for the damages done by this malware - and i say this not as some holier-than-thou security preacher, but rather as someone who has himself quite possibly unwittingly aided the bad guys who made conficker (though i'd like to think it wasn't quite as predictable as the bootroot case). there are real consequences to your words and actions in this field that i think people remain largely ignorant of and i think that if people were more aware of this and had a better appreciation for this then they'd likely see things through the same sort of ethical lens that i do.

i know of one well documented case where this awareness was arrived at a little too late. mike ellison, once known as stormbringer (among other names), was a virus writer in the vx group phalcon/skism who published an open letter in alt.comp.virus in 1994 announcing his retirement from the virus writing scene and recounting his encounter with a victim of his efforts (from google's archive):
For those of you who know me, you may know my various handles, my activities, and causes for what little fame I may possess... I am Stormbringer of Phalcon/SKISM, Black Wolf, and Jesus of the Trinity, and even some others probably no one would recognize. And today, I am retiring.

Last night, I got email from a guy in Singapore.... a nice guy, really, extraordinarily polite for the circumstances.... he had been infected with one of my viruses, Key Kapture 2, by some guy who wanted to fuck with his computer. It was filling up his drive (he had only 2 megs left) with captured keystrokes, and he had no way to disinfect it, so he wrote me, asking me for help to cure one of my own creations that had attacked his computer... I called him up voice, and talked with him.... and even then he was kind, almost like he didn't really blame me, although I feel he should.
Now, I never released my viruses against the public myself, never wanted them to be in the wild, but it happened. Fortunately, I also never wrote destructive viruses, so I didn't trash the poor guy's computer. It will be fixed, with the main harm being his time and security.

For some reason, this really shook me. I had always written my viruses as educational, research programs for people to learn more from, and for myself to explore my computer and a type of program that I found almost obsessively interesting. All of my viruses were written to explore something new that I had learned, something different, something cool..... and yet, I still managed to hurt someone...

A lot of you people are probably thinking that I'm a wuss, or whatever, and I really could care less.... fucking up people's stuff was never my intention, and yet it happened. I have decided that it is time for me to quit writing viruses, and continue on with something more productive, perhaps even benificial to others.

Don't really know what else to say, 'cept that it was an interesting journey..... and I'll still be hanging around somewhere on the nets.....

Stormbringer, Phalcon/Skism
Ex-Virus Writer
here we have someone who you and i might have considered a bad guy but who didn't consider himself a bad guy. someone who was exploring and researching with computer viruses and when he finally became aware of the consequences his actions had had, when he discovered how real those consequences were, that 'research' came to an abrupt end.

without that awareness, though, it's not so much a matter of ethics for people because they aren't consciously choosing an obviously evil path. saying that they lack ethics is a little bit like calling someone a liar when they don't know what they're saying is false. i'm as guilty of this as anyone, perhaps even more so (just look at how often i've used the ethics tag on this blog). those of us who are aware and appreciative of the consequences refer to it as a matter of ethics because we see it that way, because for us it would be a matter of ethics, and we (perhaps incorrectly) think that others should be able to see the ethical dimension as well and be aware of the consequences of their actions, especially with the not so subtle hint that we think someone's ethics are lacking.

hinting isn't working though so let's try some thought exercises instead. for the first exercise i want you to imagine that you've discovered a new way to attack something. now let's say you publish your findings and you put a demonstration up on the web so that people can play with and build upon your creation (for some of you this might be easier to imagine as you've actually done this). and then let's say someone did just that, they built on your work but they did so with malicious intent. now i want you to imagine having a conversation with an ex small business owner whose company went belly up because of the various costs of downtime due to malware based on your designs. do you express regret or remorse? do you feel for this person? how about the mother who's lost the photographic memories of her child because malware you helped create wiped all her data - how would that make you feel? or then there's the retiree who now supplements his diet with pet food because his investments were wiped out with the aid of tools you inadvertently helped create and his pension isn't really enough on its own to make ends meet. how do you think it would feel to know that you had darkened people's lives?

these are not impossible or even improbable outcomes, but i can imagine that there would still be some reluctance to accept this view. it's very natural - it's called denial. so for the second thought exercise i'm going to stay away from the hypothetical and instead deal with the historical. specifically your history, gentle reader. i want you to remember something. i want you to think back to a point in your past where you hurt someone unintentionally. maybe it was physical, maybe emotional, maybe something else but regardless of the type of hurt it should be something where you really didn't think anything would go wrong. i want you to try and remember it as clearly as possible. remember how easily it seemed to just happen, and remember how hurt the person was. finally, i want you to try and remember how you felt when you realized it was your fault, that it could have been avoided if only you'd done things differently, if only you'd thought about the consequences of your actions. i want you to remember what it was like both before and after you gained that awareness and compare it to how things are for you now - are you really aware of the consequences of your or other people's actions with respect to malware now?

most people should have at least one point in their lives like this that they can think back to. if you don't then you're either some kind of saint, or you lack the self-awareness to realize what an asshole you've been all your life. awareness is a funny thing, though. those of us who have it usually take it for granted, and those of us who don't rarely know it's missing. i'll probably still call it a lack of ethics in the future when people either create or do things that help others create malware, but maybe i'll be more conscious of the possibility that they simply know not what they do, and maybe those who still don't see the ethical dimension to aiding the bad guys will at least have a better idea of where people like me are coming from when we get on their case about it. research or proving a point seem like hollow justifications when victims enter the picture.

(edited: corrected mike ellison's name - thanks graham)

Saturday, July 25, 2009

all things come to an end - even endings

not too long ago i published a post called understanding malware growth in which i observed that the period of accelerated malware growth we had seen in previous years appeared to be over. unfortunately, according to new statistics (as reported by mcafee) accelerated malware growth is back.

now one way you could take that is that i was wrong. perhaps i was, at least with regards to being optimistic - perhaps optimism was not called for. i wasn't wrong about the rapid expansion being over, though, the growth was more or less constant for the better part of a year. that is a distinct departure from the growth trend prior to august 2007. a year is a considerable amount of time for malware to languish.

another point i don't think i was wrong about was the nature of malware growth itself. if you read the previously mentioned post carefully you should have come away with the impression that, contrary to the naive malware growth forecasts, malware growth is punctuated. there was very clearly a point at which the rapid expansion of malware growth started, and just as clearly there was a point where that expansion came to an end, a point where whatever driving factor was resulting in the higher malware production rates had finally reached its peak and the malware ecosystem returned to a kind of balance.

i had in mind the possibility that another disruption in malware growth could occur - if it can happen once it can happen again, but i didn't mention it because i didn't want to be negative, i didn't want to jinx our collective good fortune. had more up-to-date data been available, however, i would have known our good fortune had already come to an end when i wrote that post.

another disruptive event does appear to have occurred. continuing with some of the postulates from the previous post (namely that the mechanism for creating the bulk of the variants is server-side polymorphism), if we were then to go with the postulate that the web is the dominant malware delivery vehicle (as opposed to malware being carried directly in email or some other channel), then one of the key limiting factors in the automated creation of new minor variants is the problem of getting victims to view malicious content.

there are 2 things in recent memory that i think could have led to more browsers loading malicious content than before. one of those things is innovation in the field of social engineering - we've been seeing more fake celebrity news (especially deaths) recently than i recall seeing in the past. hammering on a new hook that the public at large hasn't yet grown wise to could increase the success of social engineering using that hook and thus trick more people than usual into browsing to malicious content.

the second thing that could be contributing to an increase in browsing malicious content is the increasing trend of mass website compromises through SQL injection. increasing the number of legitimate sites that serve malware (even if indirectly) increases the chances of web surfers stumbling across malicious content entirely by accident.

both of these things can lead to an increase in the browsing of malicious content, which in turn gives the server-side polymorphic engines responsible for handing out that content the opportunity to create that much more of it in an automated fashion. that said, both of these things will also reach a peak in their influence. the public will eventually grow wise enough to the fake celebrity death reports that the success rate of that ploy will drop back down to previous levels. successful sql injection will also eventually taper off as the content management systems being targeted get hardened by their vendors, or as the pool of potential victims shrinks when the entities in it realize they need to take steps to prevent this sort of compromise.
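as an aside, the hardening mentioned above mostly comes down to using parameterized queries instead of building sql strings by hand. here's a minimal sketch of the difference (using python's built-in sqlite3 purely as a stand-in for a cms database layer - the table and payload are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (slug TEXT, body TEXT)")
conn.execute("INSERT INTO pages VALUES ('home', 'welcome')")

user_input = "home' OR '1'='1"  # a classic injection payload

# vulnerable: user input is concatenated straight into the query,
# so the payload rewrites the WHERE clause and matches every row
unsafe = conn.execute(
    "SELECT body FROM pages WHERE slug = '" + user_input + "'"
).fetchall()

# hardened: the driver binds the input as data, not sql, so the
# payload is just treated as a (non-matching) literal slug
safe = conn.execute(
    "SELECT body FROM pages WHERE slug = ?", (user_input,)
).fetchall()

print(unsafe, safe)  # the unsafe query matches rows, the safe one doesn't
```

the fix is cheap, which is part of why you'd expect the pool of injectable sites to eventually shrink.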

when these things happen malware growth will once again plateau (it might already be there, but it's far too soon to tell) - until the next disruptive malware innovation, that is.

Monday, July 06, 2009

on the efficacy of passphrases

so now that we've all had a good think about whether masking passwords is really the right way to go, let's question another bit of wisdom - let's question whether passphrases are really as secure as we think they are.

you see with all the thinking out loud recently about the password authentication system one of the topics that got discussed in various sectors was the passphrase. the passphrase idea is that instead of using a relatively short string of bytes (which we'll all hope for the sake of argument isn't a dictionary word) as the shared secret you use to authenticate yourself with some system, you use an actual complete sentence which, although made up of dictionary words, is quite a bit longer and thus theoretically harder to guess or attack by brute force.

the theory goes that the more characters are in your shared secret (be it a password or passphrase or whatever) the more possible combinations a program would have to go through before it eventually found the one that matched. additionally, since a passphrase is a proper sentence made out of proper words the consensus seems to be that it would be easier to remember.

but are those things true? let's look at the length issue. one of the things you'll often encounter when selecting a password is the need to make it complex. complex in this context means that it draws characters from as wide an array of the printable ascii character set as possible and that it doesn't contain dictionary words - basically, the closer it looks to a random string of characters the better. the reason for this is 2-fold: the wide array of characters maximizes the number of possible values for each character in the password, so that the total number of possible passwords a brute force attack would have to go through is maximized, and the random non-dictionary-word property removes any constraints on what the various characters might be that would otherwise lower the total number of possible passwords (for example, in english the letter Q is almost always followed by the letter U). natural language introduces considerable constraints - although it takes 7 bits per character to represent the entire set of printable characters, the english language is generally reckoned to contain a little over 3 bits of information per character because of all those constraints. that means that an english passphrase of 20 characters is about as strong as a random password about half that length. at least, that would be the case if length were the only issue involved.
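that back-of-envelope arithmetic can be sketched in a few lines (a rough illustration only, assuming ~95 printable ascii characters and the ~3 bits per character estimate for english mentioned above):

```python
import math

def random_string_bits(length, alphabet_size):
    """bits of entropy in a truly random string drawn uniformly
    from an alphabet of the given size"""
    return length * math.log2(alphabet_size)

# a random 10-character password over the ~95 printable ascii characters
password_bits = random_string_bits(10, 95)  # roughly 65.7 bits

# a 20-character english passphrase at roughly 3 bits per character
passphrase_bits = 20 * 3  # 60 bits

# the 20-character phrase is actually weaker than the random
# password half its length
print(password_bits, passphrase_bits)
```

in other words, doubling the length buys you nothing if each character carries less than half the information.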

now how is your memory? a while back when i discussed choosing good passwords i said that you should have a different password for each account you make so that if one gets compromised the rest remain secure. nothing about using passphrases obviates the need to have a distinct and different one for each account, so can you remember 20, 50, 100 different sentences? can you remember them all perfectly? can you remember which one goes with which site? perhaps not.

how could an average user make that easier (besides reusing passphrases)? the most obvious tactic would be to use culturally significant phrases. things like:
"The rain in Spain falls mainly in the plains"
"Are you smarter than a fifth grader"
"Rudolph the red-nosed reindeer had a very shiny nose"
i say this is the most obvious tactic because we've already seen it in the media - if you'll recall, way back there was a television program called millennium starring lance henriksen. henriksen's character was recruited by a clandestine organization who supplied their recruits with passphrase protected computers, and in henriksen's case the passphrase was "Soylent Green is people" (a quote from the movie soylent green).

this adds another layer of constraints on the set of probable values and my gut instinct is that this constraint is so significant that rather than mounting a full brute force attack it would be feasible to simply mount a dictionary attack. i will concede that there are a very large number of culturally significant phrases that people might choose from, but i doubt the cardinality of that set differs significantly from that of the set of words in an actual dictionary. furthermore, it wouldn't be too difficult for an attacker to trick people at large into generating a dictionary of passphrases for him/her simply by setting up a free website that requires an account logon in order to access porn.
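to make that concrete, here's a toy sketch of what such a dictionary attack might look like - the phrase list and the victim's choice are hypothetical, and an unsalted sha-256 hash is used purely to keep the example simple:

```python
import hashlib

# a tiny stand-in for a phrase dictionary (a real attacker's list
# would hold millions of culturally significant phrases)
phrase_dictionary = [
    "The rain in Spain falls mainly in the plains",
    "Are you smarter than a fifth grader",
    "Soylent Green is people",
]

def crack(stolen_hash):
    """try every phrase in the dictionary against a stolen
    (unsalted) sha-256 hash of a passphrase"""
    for phrase in phrase_dictionary:
        if hashlib.sha256(phrase.encode()).hexdigest() == stolen_hash:
            return phrase
    return None

# a victim chose a well-known movie quote as their passphrase
victim_hash = hashlib.sha256(b"Soylent Green is people").hexdigest()
print(crack(victim_hash))  # falls to a handful of guesses
```

the point being that a 23-character phrase offers no real resistance when it's one entry in a finite, guessable set.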

as such, i question both the idea that passphrases will be easier to remember when used properly and that a passphrase is more computationally expensive to attack. i think the strength gained by increasing the length is a phantom improvement, that the added strength is thrown away due to the various additional constraints on the set of possible values. i think there will be no ease of use benefit as the sheer number of passphrases will still require them to be recorded somehow (and since they're so much longer than normal passwords, recording them all will be more onerous). i like the idea of expanding password fields to accept more characters, but i see no reason to use that extra space for anything other than longer strong passwords.

Wednesday, July 01, 2009

about face on bruce schneier

well, it was nice to see something as fundamental as the issue of password masking questioned (usability expert jakob nielsen's original article, bruce schneier's reaction), and then answered (rik ferguson's response, graham cluley's response). i wonder if that indicates we're collectively ready (some of us have been individually ready for a while) to question something equally fundamental like the concept of 'security experts'.

i say this because, as was alluded to by rik ferguson, bruce schneier seemed to be pulling 'blatantly evident factoids' out of his ass (as 'self-appointed authorities' tend to do).

empirical evidence of the decline of shoulder surfing, even if it exists, wouldn't be able to support the implied assertion that it would stay in decline if password masking went away. in fact, the opposite seems much more likely. while it's true that a determined shoulder surfer would be able to figure out your password from your keystrokes, without the password mask even the most casual shoulder surfer would easily be successful. remove the control that makes something rare and it won't be rare anymore.

but i digress. this post isn't about password masking, it's about 'security experts' - and i find myself in a bit of a conundrum because, while i would normally be decrying schneier's posting of material obviously outside his field of expertise, as someone who not only doesn't lay claim to the title of expert but actively rejects it, would i not be practicing hypocrisy?

to answer my own question: yes, yes i would. stop that.

the problem isn't (or shouldn't be) that schneier (or anyone else for that matter) posts on topics outside their field of expertise. he should be afforded the freedom to post uninformed opinion just like everybody else. in that regard the problem isn't really bruce schneier at all (even if he is a FUD spreading self-described media whore), the problem, dear reader, is you - for not recognizing how narrow a scope expertise has, where schneier's is, and consequently when to hold his statements up to greater/lesser scrutiny (note: if you have figured out that cryptography is his area of expertise and that you should question everything else then i'm not actually talking to you but everyone else).

and yes, i am aware of the implication that you should then question everything i say - that is actually precisely what i want. one of the benefits from not being an expert that i'd like to enjoy is people not blindly accepting everything i say and actually challenging me when something i say doesn't fit with something they know. i actually like arguing, i find it to have useful properties in the gaining of knowledge, and i think it's a shame that so many seem to value the finding of consensus over the finding of correctness (nevermind the tendency to use authoritative quotes as a replacement for critical thinking). oh well, at least nick fitzgerald and vesselin bontchev were still willing to expound at length on the topic of malware last i checked (though it has been a while).

month of 'do as i say'

well today is the first day of july and that means it's also the first day of the month of twitter bugs. like past 'month of X bugs', each day a security vulnerability relating to the subject product/service (in this case twitter) will be disclosed to the public.

why are they disclosing these vulnerabilities to the public? as with any exercise in full disclosure the idea is to force the company(ies) involved to clean up their act and fix the disclosed vulnerability. full disclosure puts the information into the public sphere so that many people are aware of it and something as attention-grabbing as a 'month of X bugs' captures even more mind-share and gets it that much closer to the mainstream.

in theory this is good for the public. in theory the researchers doing the disclosure are doing a good deed because they care about security and the company(ies) they're pantsing are lazy, arrogant boors who don't care about security or the public and who need to be forced into doing the right thing. and supposedly fixing the individual bugs the vulnerability researchers identify is the right thing to do.

but what if the theory is wrong? not all companies are created equal (nor are all vulnerability researchers for that matter). nobody knows what goes on inside a company except the people inside that company. it's possible that a given company may actually be working on a comprehensive upgrade to their security posture that would obviate swaths of bugs including the ones that get identified by outside researchers. but that would have to be delayed and the company's time wasted so that they can chase down these individual bugs one at a time in order to appear to be doing something. such is the consequence of shifting vulnerability notification to the public sphere.

now, i'm not going to be disingenuous and suggest this is what happens all the time or even most of the time, but being in software development i feel confident that it is happening some of the time.

external full disclosure practitioners have no way to know that, however, nor have i seen any indication that they care. despite the warm and fuzzy ideology behind the practice, full disclosure is an application of force. it is an exercise in coercion, an example of using public relations to bend a company to the vulnerability researcher's own will because supposedly the researcher knows better than the company what is best for the company's users.

i'm no more a fan of lazy, arrogant, boorish companies than the next guy, but belligerent 3rd parties leave a bad taste in my mouth too.

live by the sword, die by the sword

so yesterday i got a rather rude surprise by email. google, in their infinite wisdom, had decided to label this blog as spam and had locked it, preventing me from publishing what i had planned to publish. if i hadn't acted within 20 days the blog would have been deleted, apparently.

yes, it was the real google and not a spear phishing campaign (it would have made for good social engineering but who'd spear phish me?). they used the exact email address that i only gave to google for my blogger account, and furthermore, when i visited my blogger dashboard (not following the link in the email) it showed the warning about it there plain as day so it was definitely for real.

according to the literature i was seeing, the system by which they identify splogs is automated. they mentioned fuzzy logic, but we know what that means - heuristics (though not the same heuristics used in anti-malware, obviously). although i'm not exactly some hardcore heuristic fanboy i can't help but feel there's a bit of irony in me (or more specifically this particular blog) catching the business end of a heuristic false positive.

there is, of course, an appeal process they give you (so that your blog doesn't get deleted after the 20 day grace period) but i found it a little annoying that there was no indication anywhere that i'd successfully initiated that process and it was only when i checked again today and found the splog notification gone that i had any clue that anything had gone through. the real test, of course, is publishing and that's part of what this post is meant to achieve - if you're seeing this then the blog is back to normal.

Sunday, June 14, 2009

understanding malware growth

(this is actually kind of old now, but real life has a way of interrupting this whole blogging thing)

alex eckelberry posted some graphs depicting the size and growth of the malware samples collected by andreas marx. what most people will take away from them is that malware is growing at an incredible rate, and that's kind of depressing. there's more to the graphs than just that however, and not all of it is depressing - in fact it looks pretty good to me.

but to understand this i think i need to point out just how bad the forecast line of the graphs is. the line represents the expected value according to a particular model of malware growth. of course models and reality never match up absolutely perfectly, but the forecast suggests a fairly simplistic model and it appears that as time wears on it diverges more and more from reality. let's construct a model bit by bit based on some reasonable assumptions and see how close it matches reality.

let's first suppose that there are a constant number of malware creators with a fixed amount of resources (i don't mean money or computing power, i mean man-power resources - you can only do so much in a day, that sort of thing). let's further suppose that the amount of malware a malware creator can create with X number of resources is constant. what would that look like?

this depicts constant growth with a linear increase in population over time. this is not the final model, of course, but simply a starting point. one of the basic flaws with it is that it assumes that the population of malware creators themselves is constant. what if we instead suppose that the set of malware creators themselves were growing at a gradual but constant rate, what would that look like?

this depicts linear (rather than constant) growth in the malware because as the set of malware creators increases over time, so too should their malware output. it also shows a curve in the population vs time graph, although the curve isn't quite as pronounced as the one depicted in the forecast on the population graph on alex's blog and that's because in that case the forecast on the growth graph predicted non-linear growth.

how could we get non-linear growth? well another of our starting assumptions that we need to modify is that the number of malware samples a malware creator can create with X number of resources is fixed. in reality malware creators come up with innovations and some of those innovations improve the efficiency of their malware creation efforts such that they can create more samples with the same effort as before. minor innovations in this area (minor in the sense that the efficiency improvement is small) probably happen all the time but major innovations are probably rare. furthermore, adoption of such innovations is not immediate, it takes time for the idea to spread amongst the malware creators. finally, such innovations may not apply to all sorts of malware so it may be that only part of the malware creator population ever adopts the new methods.

so now let's suppose one of those rare groundbreaking innovations happens - one malware creator comes up with a way to increase his malware creation rate by an order of magnitude. then he tells 2 friends (it may not actually work quite that way but one way or another the idea gets communicated to others) and they tell 2 friends, and so on until eventually they approach a peak number of creators using the new method and the rate at which creators switch from the old to new method slows to a trickle. what might that look like?

this depicts the same linear growth as before at the beginning (with the scale adjusted) followed by a period of rapid expansion as the new method reaches its saturation point amongst the malware creators (most, but not all of them adopt the new method), and then it returns to mostly linear growth but with a stepped sort of appearance as individual adopters of the new method gradually continue to trickle in.
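to make the shape of that model concrete, here's a toy simulation - every number in it (creator counts, output rates, adoption speed, the 80% saturation ceiling) is an invented illustrative value, not a measurement of anything real:

```python
import math

# toy model of the malware growth described above: a slowly growing
# creator population where a much more productive method spreads
# logistically after some point - all values are invented for illustration
def simulate(days=100, base_rate=1, boosted_rate=10,
             initial_creators=10, growth_per_day=0.5,
             adoption_start=40, adoption_speed=0.3):
    """return the cumulative sample count for each day."""
    cumulative, total = [], 0.0
    for t in range(days):
        creators = initial_creators + growth_per_day * t  # gradual growth
        if t < adoption_start:
            adopters = 0.0
        else:
            # logistic 'tell 2 friends' adoption of the new method,
            # saturating at 80% of the population (not everyone switches)
            x = adoption_speed * (t - adoption_start)
            adopters = 0.8 * creators / (1 + math.exp(5 - x))
        total += adopters * boosted_rate + (creators - adopters) * base_rate
        cumulative.append(total)
    return cumulative

counts = simulate()
```

plotting `counts` gives roughly the shape described: near-linear growth at first, a rapid expansion as adoption takes off, then steep but steadier growth once adoption saturates.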

now, i don't know about you but to me this is starting to look rather familiar. it's not quite the same shape as the actual figures, mind you, that looks a lot less smooth than this, but it's a better approximation to the shape than the simple curve used in the forecast and it's based on an entirely reasonable set of forces that would affect malware growth.

as a final step, let's try to imagine why the real values don't follow such a smooth curve. so far we've been assuming that the population of malware creators only increases over time, but that's not entirely true, is it? there are any number of reasons why individuals might leave that population either temporarily (due to things like criminal prosecution, or collapse of particular infrastructure they were using), or permanently (due to things like retirement or death). if we were to take some of the malware creators in our model out of the picture temporarily at various points (and assume that the ones that leave permanently will simply be replaced and so have the same effect as a temporary departure) what might that look like?

as you can imagine, because the malware creators using the new methods contribute so much more to the growth of malware than do the ones using old methods, when the population using the new methods drops slightly it has a much more noticeable effect on the graph than when the population using the old methods drop.

so how reasonable are these assumptions? are malware creators' resources fixed? i've yet to see a way to increase the number of hours in a day so any individual malware creator is still limited in the number of man-hours he or she can devote to the task of malware creation.

can an innovation have a huge effect on the number of malware samples created? well just look at the massive number of variants that server-side polymorphism has led to. you can basically think of those setups as variant factories and once they're constructed they really don't require any extra effort on the part of the malware creator in order to create more samples - it can be entirely automated and is then only limited by the number of victims who download samples.
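as a benign illustration of why a variant factory inflates sample counts so dramatically: if the wrapper around an unchanging payload differs by even one byte per download, every download has a brand new file hash, and hash-based sample counting treats each one as a new piece of malware. the XOR 'packer' below is a deliberately harmless stand-in for the real thing:

```python
import hashlib

# the 'payload' never changes, but wrapping it with a different one-byte
# key per download yields a file with a brand new sha256 every time -
# so a hash-based sample counter sees 256 'new' pieces of malware
payload = b"the same functional payload every time"

def repack(payload: bytes, key: int) -> bytes:
    # trivial stand-in for a server-side packer: prepend the key and
    # XOR the payload with it
    return bytes([key]) + bytes(b ^ key for b in payload)

hashes = {hashlib.sha256(repack(payload, key)).hexdigest()
          for key in range(256)}
print(len(hashes))  # 256 distinct 'samples' from one payload
```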

would adoption of such an incredible innovation really be limited? again, look at server-side polymorphism, how well would that innovation apply to malware such as autorun worms? not that great actually. if you think about it you can probably come up with other examples of malware types and malware techniques that don't naturally go well together.

do people really leave the malware creator population either temporarily or permanently? well first of all people do actually die, and malware creators are among them so yes people do leave permanently - and again, with the example of server-side polymorphism, if the servers go down because for example the hosting ISP gets de-peered (as has famously happened a few times now) then yes the creators will temporarily be removed from the malware creator population until such time as they can set their operation back up somewhere else.

so what else can we take away from those graphs at alex's blog besides the fact that malware is increasing at an alarming rate? well, for my money it's that a big chunk of that alarming rate was due to an innovation (probably server-side polymorphism) that created a rapid expansion in the malware creation rate and that that rapid expansion seems to be over (in fact it looks like it came to an end nearly 2 years ago). that's not to say malware growth will begin to slow of course, but simply that the really scary part is behind us - the malware growth rate is still enormous, but it's not increasing at an exponential rate anymore. maybe a new innovation will come along and take us for another rough ride, maybe not, but as it stands right now catching back up to the bad guys (assuming you were of the school of thought that we needed to) looks a lot more feasible than it did before.

Monday, June 01, 2009

now read this!

there was a while there where i was posting links to other people's blogs to help drive traffic to what i thought were good posts. i fell out of that habit, however, as it became forced and formulaic.

that doesn't mean i don't still come across good posts, but good just isn't really good enough - the criterion of "good" is too nebulous, it gets watered down and loses all meaning.

so now the fact that i'm pointing out a good post again actually means something because it's not just good - it's good enough that i felt moved to do this:

richard bejtlich's post on obama's cybersecurity speech (despite our philosophical disagreement over the classification of malware as a threat) was an excellent bit of creative re-writing of the original speech. for a moment i almost thought i was reading sun tzu again (yes, i am one of those people who reads and re-reads the art of war). if only such insight could have really come from the top like that (or at least been repeated by the guy at the top for everyone to hear).

Saturday, May 16, 2009

fred cohen says anti-virus doesn't work

the EICAR (european institute for computer antivirus research) conference was held earlier this week and something which may at first blush seem quite curious happened... fred cohen (a keynote speaker, the father of computer viruses, and the man behind the seminal academic treatment of the subject) said that anti-virus doesn't work...

i wish i'd been there to see that (actually no i don't, i almost certainly would have rolled my eyes, crossed my arms and gone to sleep), but since i'm not part of the industry and flying over to berlin on my own dime is kinda out of the question i had to miss out... it seems i wouldn't have even heard about it if not for randy abrams posting about it here... i've waited patiently for a couple of days now for anyone else to write something about it so i could get more details of what was actually said, but that doesn't seem to be forthcoming... even eicar's site only tells me that the title of his talk was supposed to be "Computer Virology: 25 Years From Now"...

so, since i don't have first hand experience of the conference to give me a clue as to what on earth he was thinking, i went and did some research to see if anything he's said or written in the past might give me a clue and i believe i've found what i was looking for...

in an article on his site titled "Unintended Consequences" it starts to become clear that he feels the computer security industry is in the practice of detecting and reacting to bad things instead of preventing and deterring them, and he paints that in a rather negative light... when i read this i was struck - if he thinks anti-virus (probably one of the most mainstream parts of the computer security industry) is about detection and not prevention then he's doing it wrong!... in my earlier post on defensive lines in anti-malware i illustrate pretty clearly how anti-virus technologies (both the ones that security numpties consider av and the ones they don't) are employed as preventative measures, preventing such things as access to threats, execution of threats, and behaviour of threats (among other things)...

the thing is, fred cohen is really not the sort of person you'd expect to be using av wrong... thankfully i found a pair of interviews with him - one called "Three Minutes With Fred Cohen, Virus Trends Tracker", and the other "Fred Cohen - Not all virus writing is a crime"... although anti-virus is certainly used for prevention, its underlying mechanisms are largely detection-based - however when cohen talks of prevention he's referring to the type where the underlying mechanism is sort of like an immunity... it all became clear to me when he said we should be using systems that are less susceptible to viruses in the first place - systems that have more limited functionality... this relates back to his 1984 paper Computer Viruses - Theory and Experiments where, in the section on prevention (i really should have looked there earlier) he discusses the 3 key properties of a system that allow viruses to spread - sharing, transitivity of information flow, and the generality of interpretation...

systems with limited functionality refers specifically to the third of these properties - the generality of interpretation - by limiting the functions that a system can perform you limit what can be done as a result of interpreting data as code... cohen poses the examples of turning off macros in ms word, or turning off javascript in the browser - i would follow those up with more contemporary examples such as microsoft recently announcing the disabling of autorun for flash media, or the increasing trend (at least among security conscious users) to use alternate PDF viewers as opposed to Adobe's own Acrobat Reader which has been bloated with unnecessary (and frankly unsafe) functionality for a long time...
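cohen's 'generality of interpretation' property can be seen in miniature in any language that offers both a general and a deliberately limited interpreter - this python example is my illustration, not his:

```python
import ast

# 'generality of interpretation' in miniature (my illustration, not
# cohen's): eval() will interpret any expression - including hostile
# ones - while ast.literal_eval() only accepts literal data, which
# makes it a deliberately limited-functionality interpreter
untrusted = "[1, 2, 3]"
hostile = "__import__('os').getcwd()"  # stands in for something nastier

print(ast.literal_eval(untrusted))  # [1, 2, 3] - data parsed safely

try:
    ast.literal_eval(hostile)       # the limited interpreter refuses
except ValueError:
    print("rejected: not a literal")
```

the limited interpreter can't be talked into running code because running code simply isn't among the functions it has - which is exactly the kind of limiting cohen is talking about.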

i'm not going to try and detract from his argument by saying the idea doesn't have merit - it does... i even considered whether my defensive lines post needed another line regarding system immunity, though i later realized that limited functionality is already covered by existing lines (as it's just another method of preventing execution and/or particular behaviours)... the problem isn't the merit of his argument in favour of this technique, the problem is figuring out how he could think anti-virus is broken and this technique isn't - or what he thinks the security industry can do about limiting functionality...

the very existence of the security industry underscores the fact that there has historically been no (and largely still is no) place for security in the rest of the computer industry... limiting functionality is the domain of the original product developers, not 3rd party security companies - the nortons and mcafees of the world don't have the freedom to cripple Acrobat Reader for example (at least not without getting sued)... and the original product developers by and large aren't limiting functionality because that goes against consumer expectations for technology... technology is supposed to empower us to do more, to do things better/faster, etc - so it's a given that when we choose something we expect to empower us, we aren't going to go with the less powerful option... this is borne out by history as well because there have always been less functional/powerful options available and the market has consistently selected against them in favour of the bloated product suites and application platforms...

furthermore, limiting applications is all well and good but there's plenty of malware out there that exists as native binary executables, and limiting the operating system's functionality (or worse, that of the hardware itself) to prevent things like self-replication or botnet command and control communication is a far trickier proposition...

so given the difficulty in getting more adoption of limited systems and given the fact that even cohen isn't suggesting limiting things to the point of systems with fixed first order functionality, i can't help but wonder how he could consider this any less flawed an approach in practice than anti-virus... it is clear to me that it will be just as possible to work around the limitations he's proposing as it is to work around anti-virus technologies, and limited functionality defenses are a lot less agile than things like known-malware scanning so when those special cases come along it will cost a great deal more to redesign a system to further limit functionality so as to deal with such special cases...

that should give you a clue as to my opinion of limited functionality-based defenses - they have promise as a complement to anti-virus techniques... anti-virus may not work in the sense that it's not completely effective at preventing malware infestation, but limited functionality systems aren't either... anti-virus may not work in the sense that it's a computationally expensive approach to prevention, but limited functionality has logistical costs for development, marketing, and maintenance that we may never completely overcome... i fully endorse the idea of selecting software that only has as much power and functionality as you need and no more, but i still think you should use anti-virus right alongside that...

frankly, at the end of the day, the main difference between systems that are designed to have more limited functionality and ones where anti-virus is deployed is that the former's limitations are baked in while the latter's limitations are bolted on (because like it or not when an anti-virus stops a piece of malware from executing on your system it is effectively limiting your system's ability to perform the function that malware represents)... that appears to be what cohen's real contention is about - that baked in is better than bolted on... in some ways that may be true, but in others (like the matter of agility) it's definitely not (because it's easier to change the bolt than it is to change the whole system)...

Wednesday, May 06, 2009

defensive lines in anti-malware revisited

now, i realize i've covered the topic of defensive lines in anti-malware before, about 2 years ago, but i've refined my thinking since then and now i've found the motivation to share (guess i was just being lazy before)... this thread on wilders security forums made me realize that there are some misconceptions about defensive lines in anti-malware... specifically the ideas of what can be the first or last line of defense seem to require some refinement so here goes...

if you're thinking about multiple lines of defense then it makes sense to consider that there is a sequence of defensive lines at various 'distances' from the final outcome of complete system compromise... that is to say that there are various opportunities (some earlier, some later) during the course of a piece of malware's progression towards compromising a system at which you can prevent a system from becoming compromised, and at each of these points a defense can be mounted... the first line of defense, therefore, should be the one furthest away (or at the earliest opportunity) from that possible outcome... likewise the last line of defense necessarily must be the last chance for preventing that outcome...

so what is the first possible line of defense then? well, the first opportunity you have for avoiding having your computer get infested with malware is the point of initial exposure - if you don't encounter the malware in the first place then there's nothing else to say about it... when it comes to defending oneself the main defense here is actually risk aversion behaviours like not going to so-called 'dodgy' sites, not plugging strange flash drives you found on the ground into your computer, etc... when it comes to defending others, any malicious site take-down effort will also help prevent people from coming across the malware on malicious sites that get taken down... somewhere in the middle are tools like mcafee's site advisor, or google's malware warning functionality that can help users determine which sites to avoid...

the next logical point at which you can prevent compromise is preventing the transfer of the malware onto the machine we're trying to protect... for automatic transfers (such as drive-by downloads or vulnerability exploiting worms) the primary defense is closing the avenues by which such automated transfer can take place, usually by applying security patches (at least where browser exploits are concerned)... a basic NAT enabled router can also protect machines behind it from certain types of automatic transfer by virtue of preventing unsolicited traffic to those machines... for manual transfers, risk aversion behaviours again play an important role - only downloading software from reputable sites (preferably direct from the vendor's site) being a prime example... a somewhat after-the-fact but related defense is on-demand scanning all incoming materials (whether you think they could carry malware or not) before allowing them to be accessed for any other purpose (sort of like a mandatory quarantine period)...

after potential malware has successfully been transferred onto the system the next opportunity to prevent its progression to full compromise is preventing access to it... this is where on-access scanners come into play... if the user can't access the malware then there should be no way they can trigger it...

if your malware access controls fail to prevent access (say because the malware is packaged within some obfuscating wrapper, or perhaps the malware is just new) then the following point at which full compromise can be avoided is by preventing execution of the malware... on-access known-malware scanning can prevent execution as well, at least for known malware (if it's in some obfuscated wrapper like a dropper, then when the malware is 'dropped' it will no longer be obfuscated and that camouflage will no longer be protecting it from the scanner)... application whitelisting can also prevent a great deal of malware (even that which is too new to be considered 'known' yet) from executing (by virtue of preventing everything that isn't already explicitly allowed to execute from executing) so long as the user doesn't give the malware permission to run...
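a minimal sketch of what the hash-based flavour of application whitelisting looks like (the 'binary' and its contents below are invented for illustration) - everything not explicitly approved is denied, which is exactly what lets it stop malware too new to be 'known':

```python
import hashlib

# minimal sketch of hash-based application whitelisting: execution is
# allowed only for binaries whose hash is already known-good, so even
# brand new malware is refused by default (file contents are invented
# purely for illustration)
def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

trusted_binary = b"\x7fELF...contents of an approved program"
whitelist = {sha256(trusted_binary)}

def may_execute(file_contents: bytes) -> bool:
    # default-deny: anything not explicitly approved is refused
    return sha256(file_contents) in whitelist

print(may_execute(trusted_binary))                # True
print(may_execute(b"never-before-seen dropper"))  # False
```

of course, as noted above, this only holds so long as the user doesn't get talked into adding the malware to the whitelist themselves.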

after malware has successfully begun to execute on a system, any certainty about preventing that malware from compromising that system is gone, but that doesn't mean there aren't still possibilities for prevention... it may still be possible to prevent compromise by preventing one or more of the malware's bad behaviours using behavioural blacklisting, behavioural whitelisting, or some combination thereof (all of which falling under the general umbrella of behaviour blocking)... in so far as certain behaviours involve accessing system resources that can have their access restricted, operating as a user who doesn't have access to those resources (ie. running as a non-administrative user, following the principle of least privilege) will prevent a great deal of existing malware from being able to operate properly and thus can prevent compromise...

finally, if malware gets past all those previously described defensive lines, there's still one possible opportunity to prevent full system compromise... if you can avoid the consequences of the malware's behaviour by being lucky enough (or by managing the circumstances well enough) that the malware was actually running in a sandbox rather than on the main host system then it will be the sandbox that gets compromised rather than the main host system... sandboxes are such that their compromise is usually inconsequential because they can be regenerated easily - though extrusion of sensitive data may still be a problem, if the malware was contained within a sandbox then compromise of the host will have been avoided... otherwise the system will have become compromised, prevention will have completely failed, and it would now be an issue of detecting the preventative failure, diagnosis to determine the extent of that failure, and recovery from it...

you can make any one of these stages your own first line of defense, but only by ignoring earlier opportunities to defend yourself... likewise, you can make any of these your last line of defense, but only by ignoring subsequent opportunities to defend yourself... think about what that statement means: ignoring opportunities to defend yourself... is that really something you want to do? does it sound wise? it certainly doesn't sound that way to me...
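those defensive lines can be strung together as a simple pipeline sketch - the layer names below are just shorthand for the stages described above, and the point is that a threat has to get past every layer you actually use:

```python
# the defensive lines described above as a simple pipeline sketch:
# a threat must get past every layer to compromise the system, so
# each layer is one more opportunity to stop it (layer names are
# shorthand for the stages in the text)
LAYERS = [
    "avoid exposure",
    "prevent transfer",
    "prevent access",
    "prevent execution",
    "prevent behaviour",
    "contain in sandbox",
]

def attack(defences: dict) -> str:
    """defences maps layer name -> True if that layer stops the threat."""
    for layer in LAYERS:
        if defences.get(layer, False):
            return "stopped at: " + layer
    return "system compromised"

# ignoring early layers just means fewer chances to stop the threat
print(attack({"prevent execution": True}))  # stopped at: prevent execution
print(attack({}))                           # system compromised
```

skipping a layer doesn't break the model, it just removes one opportunity to stop the threat before the 'compromised' outcome - which is exactly the point above about ignoring opportunities to defend yourself.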