Anonymous Own3r – The
Hacker Script Kiddie That Will Forever Go Down in History
Yesterday GoDaddy experienced an outage that an alleged hacker claimed responsibility for, a claim GoDaddy ultimately shot down. Since then he has made numerous attempts to save face, but each one only further exposes his lack of skill and knowledge about technology and hacking. He posted source code from a publicly available open-source project for connecting to GoDaddy, written in a different language than GoDaddy develops in, and claimed it was their source code. He also showed a screenshot of an apparent attack on their site, which was nothing more than injecting text into the site’s URL that gets spit back out on the page; nothing was actually hacked.
Even if it were an attack and GoDaddy is lying, having access to a botnet, entering an address, and pressing a button doesn’t make you a hacker any more than being able to drive a car makes you a world-renowned mechanic.
It’s quite amazing just how much press and how many Twitter followers his claims have generated. I guess the media loves the idea of an attack by Anonymous; it sounds better than saying GoDaddy experienced technical issues that caused a large outage.
The media, and all the people who read and watch it, just love their fear stories.
I’ve created a 12-minute demo video of the current progress of Skynet, an open-source framework for building high-availability distributed services.
I’m going to be sharing some knowledge about the Go programming language and Skynet (an open-source project I’ve been working on) with the Tampa.rb guys this Wednesday night.
Wow, hard to believe two years have passed since I last posted here. I think it’s time to kick things back into gear now that I have more free time. Many people who know me are aware that psychology is a big hobby of mine; the way people interact, the decisions they make, and why they make them is extremely intriguing. This article will be the start of a long series of articles regarding the psychology of a programmer’s daily decision-making process.
The topic of discussion today is fallacies that affect programmers on a day-to-day basis, also referred to as “cognitive biases”. A quick look on Wikipedia yields this useful description:
A cognitive bias describes a replicable pattern in perceptual distortion, inaccurate judgment, illogical interpretation, or what is broadly called irrationality. They are the result of distortions in the human mind that always lead to the same pattern of poor judgment, often triggered by a particular situation.
What does this mean? It means that, due to emotional irrationality, you are prone to making poor judgments and decisions based on previous experiences, even if the opposite outcome is better for you in the long run. Even the specific way a question is worded can lead you to opposite choices.
The first fallacy I’m going to talk about is the “Sunk Cost Fallacy”. I’m going to quote Peter Michaud‘s post here because I really like his description:
Sunk Costs are costs which have already been incurred and cannot be recovered. The Sunk Cost Fallacy is a mistake in reasoning in which you consider the sunk costs of an activity (instead of the future costs) when you decide whether you should continue the activity or not.
As developers we experience this problem every day while coding: the language our app is written in, the libraries we are using, a particular design pattern or architecture we have chosen. Deep down we know it’s not healthy for the maintainability of our app down the road, yet we find it hard to scrap it and start fresh. Why is this? It all boils down to loss aversion.
loss aversion refers to people’s tendency to strongly prefer avoiding losses to acquiring gains. Some studies suggest that losses are twice as powerful, psychologically, as gains.
You see, although refactoring, or rewriting altogether, may seem like a lot of work, even though we know it’s ultimately for the best, it’s not the idea of that work that keeps us painfully stuck in our old decisions. It’s the pain of feeling like all the time, money, and resources spent on the old implementation were a complete waste. By changing to something different we have to accept the fact that we threw all of that away, that it was worthless. Or was it?
I really love some of the examples on You Are Not So Smart‘s article. It talks a lot about how the sunk cost fallacy and loss aversion keeps games like Farmville thriving. It’s worth the read if you can spare the extra couple of minutes. I’m going to quote an example from the article.
If you dropped your cell phone over the edge of a cruise ship, you would need James Cameron’s unmanned submarine fleet to find it again. Sure, you could spend a small fortune to retrieve it, but you wouldn’t throw good money after bad. Laid out like this, logical and rational and easy to pick apart, you can pat yourself on the back for being such a reasonable human. Unfortunately, the sunk costs in life aren’t always so easy to see. When something is gone forever it can be difficult to realize it. The past isn’t as tangible a concept as the sea floor, yet it is just as untouchable. What is left behind is just as irretrievable.
He’s right: in a situation like this the answer is clear. It’s gone, move on. But when it comes to the things we hold more dear and have slaved over, the decision becomes clouded.
But let’s really think about this: you replace your car every so many years; you replace worn clothes, shoes, and many other things. You paid money for them, sometimes quite a bit, but when they’ve seen their day it’s time to move on.
I’d advise that we take this into consideration when deciding whether to deprecate a library, language, framework, or implementation that is no longer as useful as it once was and is just causing more work and pain to keep stable and maintainable, rather than clinging to it because we feel we wasted time and resources on it. That time is already gone, irretrievable. You will never get it back, and you will never get back the time you continue to pour into something that is just no longer useful.
The fact is that it was useful for a period of time; it served its purpose well. Pat yourself on the back for that, and then pat yourself on the back for the next rendition of it. Your time was not wasted, remember that. You performed a task to get your app to where it is today, and it was the best choice at the moment, but make sure to consider whether it is still the best choice moving forward, or whether it’s just going to cause you more grief than anything.
The past is irrelevant to this particular decision. What matters most is what is best for your future, as well as for the app and codebase. Choosing a new direction does not mean that the old choice was a bad one. It was a learning experience and something to grow on, and growing out of something is a good thing. Your users evolve; the company evolves, grows, and adapts, and so should the codebase. Your employer would not stay in a building they invested two years and lots of money in if they needed more space to accommodate all the new hires. You shouldn’t stay with the same technology decisions if the direction of your app is changing either. It served its purpose for the time being, and it was great. What’s next?
Is local multiplayer dead?
What happened to local multiplayer? I’m tired of dropping $60 (over 1/10th the cost of the console itself) on a game that I can’t enjoy with my friends. In the same house. Which you have been able to do since the home version of Pong in 1975.
Fight the guy you can’t see
I recently purchased the new UFC game for the PS3, which is single player only locally and 2-player online. I have been having friends over, talking shit while kicking their asses in fighting games, since I was a kid. Karate Champ let two people fight locally back in 1984, and we all grew up playing Street Fighter and Mortal Kombat in the late ’80s and early ’90s. How the hell is it that it can be 2009, and we are making leaps and bounds in the technology of these systems, but removing fundamental concepts that got people into video games in the first place?
We have wireless controllers, wireless internet, live firmware updates, and gameplay updates. Game consoles have more processing power than some computers; projects have even been created to cluster them into supercomputers. Video games are output in higher quality than most television channels and even some movies. Cut-scenes have become almost cinema quality, with real-time lighting and shading, and physics engines that make almost any action realistic.
These days most people who can afford the expense of a 7th-generation game console have large TVs, 50″–73″, some even bigger. We were playing split-screen games when the biggest TV any of my friends and I had was 19″, and now that we finally have TVs big enough that it doesn’t matter if we split the screen, we no longer have the ability to do so.
You can play against 16 guys across the world in multiplayer, but not the guy next to you. Some MMOs are capable of supporting hundreds of thousands of simultaneous users. You have voice chat and buddy lists to accommodate interacting with friends, but there is nothing for you and your friends if you want to hang out in the same house and have a few beers.
I am just finding it hard to believe that with all this technology, and the size of our TVs, game producers are ignoring a fundamental feature of video games. This is why many of us started playing video games in the first place. Video games are no longer something I can do with my friends when they come over, unless everyone is up for a game of Little Big Planet. There are very few games you can enjoy with your friends.
Anti-Social or Money Hungry?
Has society become so anti-social that the only important part of games is single-person interaction? Do we not want to encourage people to invite their friends over for a friendly ass-kicking in the newest fighting or racing game? Or is it just that the corporations that develop these games are so hungry for money that they put people in a position to buy more consoles and more games, setting them up in multiple rooms, in order to get somewhere close to the same interaction they had as kids?
Whatever the reason, I hope this is a short phase that will soon be over. I won’t be playing many new video games until I am able to look my friend in the face while I tell him he is a shitty driver, or that I’m going to school him in the newest boxing game, or whatever it may be.
While working on some front-end optimizations for a venture of mine, I went on the lookout for a better bundling strategy than the one provided with the default Rails stack.
Enter asset_packager, a project created by Scott Becker, which does exactly what I was looking for. You configure your bundles, and the order in which files are placed in each bundle, through an easy-to-configure YAML file, then run a rake task when deploying to create the bundles. I won’t go into detail here about how to use it, but feel free to follow the link to the project page: http://synthesis.sbecker.net/pages/asset_packager
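To give a sense of the configuration, asset_packager reads a YAML file (config/asset_packages.yml in the default setup) that defines each bundle and the order of its files. The bundle and file names below are purely illustrative, not from any real project:

```yaml
---
javascripts:
- base:
  - prototype
  - effects
  - application
stylesheets:
- screen:
  - reset
  - layout
```

The rake task, run at deploy time, then merges and minifies each bundle into a single file.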
One thing I did notice while looking into this project was that it uses JSMin ported to Ruby. I am personally a fan of YUI Compressor, so I opted to go with a fork of asset_packager by Erik Andrejko, found here: http://github.com/eandrejko/asset_packager/tree/master, which uses YUI Compressor for its minification process.
On to the good stuff.
As happy as I was with what I had found, it still seemed to have some inherent problems:
- When bundling CSS, if you are using relative paths, they will no longer work once the files get bundled into a file in your base directory, so you need to manually modify your CSS files to use absolute paths to your images. Those files can often be part of a plugin or library you are making use of, like jQuery UI.
- If you have implemented far-future expiration dates on your media like you should (my next post will talk about this), your CSS files do not have a cache buster to ensure that when your media is updated the cache is expired.
- Rails has the concept of an asset_host: it creates a key based on your asset name that will always map to the same domain, spreading media across asset hosts via a configuration option in your environment file to help overcome the limit of connections per domain. Well, if you design sites the way I do, there are very minimal links to media inside the markup; it’s all contained in the CSS. So I make very little use of the image_tag helper, and therefore little use of the rotating asset hosts.
I have created a fork of Erik Andrejko’s repository for asset_packager and implemented solutions for all of the problems I found above.
During the bundling process it will now determine the absolute paths of your assets and use those in the bundled file only, leaving your originals untouched.
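The core of that rewrite can be sketched as a simple substitution over each stylesheet: find relative url() references and prefix them with the stylesheet’s public path. This is a minimal illustration of the idea, not asset_packager’s exact code; the regex and method name are my own:

```ruby
# Rewrite relative url() references in a CSS string to absolute paths
# under base_url. Already-absolute (/...) and fully qualified
# (http://...) URLs are skipped by the negative lookahead.
def absolutize_css_urls(css, base_url)
  css.gsub(/url\(\s*['"]?(?!\/|https?:)([^'")\s]+)['"]?\s*\)/) do
    "url(#{base_url}/#{$1})"
  end
end
```

Absolute and fully qualified URLs are deliberately left untouched, so hand-written absolute references keep working after bundling.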
Paths to images are now appended with a cache buster using the same approach as Rails: determining the last-modified date and appending a timestamp to the URL. Now, when running your rake task for bundling, you can rest assured that new versions of your media will be seen even if you have set far-future expiration dates.
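The cache-busting approach mirrors what Rails’ own asset tag helpers do: append the file’s last-modified time, as a Unix timestamp, in the query string. A minimal sketch (the method names here are illustrative, not the actual implementation):

```ruby
# Return the asset's mtime as a timestamp string, or '' if the file
# can't be found on disk.
def asset_timestamp(path)
  File.exist?(path) ? File.mtime(path).to_i.to_s : ''
end

# Append the timestamp as a cache buster, e.g. /all.css?1234567890.
# Whenever the file changes, the URL changes, expiring far-future caches.
def bust_cache(url, file_path)
  ts = asset_timestamp(file_path)
  ts.empty? ? url : "#{url}?#{ts}"
end
```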
If the environment you are running the rake task against has an asset_host set for Rails to use, asset_packager will pick up on it the same way Rails does, replacing %d with 0–3. The same asset will always get the same hostname, to ensure caching works properly.
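Rails 2.x picked the host by hashing the asset path into the range 0–3, so a given asset always resolves to the same hostname and stays cacheable. A sketch of that idea, using CRC32 as a stable stand-in for the hash (the hostname pattern is illustrative):

```ruby
require 'zlib'

# Resolve a %d-style asset host deterministically: checksum the source
# path into 0..3 so the same asset always maps to the same host.
def compute_asset_host(pattern, source)
  pattern.sub('%d', (Zlib.crc32(source) % 4).to_s)
end
```

Determinism is the important property: if the same stylesheet resolved to a different host on each request, browsers would re-download it instead of hitting their cache.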
While my fork with these changes is based on Erik Andrejko’s repository, none of the changes are specific to that fork; they can easily be added to Scott Becker’s original implementation. The only changes were the addition of a single method, calls to it from within the bundle method, and a change to the Rakefile to include the environment so that the method has access to the Rails configuration.
My GitHub repo can be found here: http://github.com/erikstmartin/asset_packager/tree
I’m open to any comments or questions you may have. Let me know if you find any problems or if you just want to tell me how useful you have found the changes.
Update: Erik Andrejko is quick! It appears my changes have already been merged into his branch.
A couple of days ago I noticed some unexplained errors when running rake spec with the rspec-rails gem, errors that appear to have been there for a while. 3/07/2009
When running script/generate rspec, new rake tasks are placed in lib/tasks/rspec.rake.
If you open this file, you will see that line 101 is the offending line:
::STATS_DIRECTORIES << %w(Routing\ specs spec/lib) if File.exist?('spec/routing')

It should instead read:

::STATS_DIRECTORIES << %w(Routing\ specs spec/routing) if File.exist?('spec/routing')
I sent a message to David Chelimsky so this issue should be resolved in the next release.
I previously posted a rant and a bit of a story telling article about some of my horrible experiences with recruiters. Well about a month ago I had yet another experience. I give to you the proof.
I was sitting at home and I received this email. (shortened to just useful excerpts)
Trust you are doing well. I just left you a Voice Mail.
Please let me know if you are interested in the position bellow by sending me your resume. I will call you to further discuss..
Web Developer III
Location: Orlando, FL
Job Type: Contract
Duration: 9 months
.. long position summary ..
.. requirements ..
Significant experience with Content Management Systems
3 years experience with ASP/JSP/PHP or other server-side scripting language
Experience with Flash and ActionScript a strong plus
Experience with two and three tier web architecture.
Reading this, I’m thinking: wow, this sounds an awful lot like a position where I work. Web Developer III: my employer ranks our development positions I, II, III, while most places use Jr., Sr., etc. OK, location: Orlando. Even closer. A 9-month contract? Now this is getting eerie. Any contract position I have ever been offered is 6 or 12 months, occasionally I’ll get offered a 3-month, but 9 months is a Disney thing.
So, on to the requirements. Significant experience with CMSes: we heavily use them. ASP/JSP/PHP or other server-side scripting language: at this point it has to be Disney. How many companies don’t care what language you have experience in? Disney has its own internally developed language, so we hire from all backgrounds, but I’d guess the overwhelming majority of companies hire straight from the large pool of people who use the technology they implement.
But surely he couldn’t have emailed me an offer for a job at my current employer; after all, he found me through my resume on monster.com (which, I’d also like to mention, hasn’t been updated in at least 6–9 months). I mean, it’s the first entry in my previous experience section. OK, the suspense is killing me, let’s just ask.
Is this position with Disney Internet Group / Disney Interactive Media Group / Walt Disney Parks and Resorts Online ? Based off the contract term, the position title, skill set they are seeking, and overall job description it sounds just like it?
It did not take long to receive a reply, maybe 15 minutes.
Thanks for your response. Yes the position is with one of the Disney groups. Would you be interested? feel free to send in your resume and I can call you back to further discuss the position with you. Feel free to contact me if you have any questions. My details are listed below.
This is the point where I yell some profanity, along the lines of “you have to be f*in kidding me!” I shouldn’t be surprised, but I am. Why on earth would you not read someone’s resume before contacting them about a position? I’m fired up now, and as usual for me I’m pretty blunt; I feel something needs to be said. So this is my reply:
I can’t tell you how much this response disappoints me. The sad truth is that this isn’t the first situation like this that has happened to me, and it’s almost a daily occurrence at the office. I believe I speak for quite a number of professionals when I ask that you please read our resumes before contacting us regarding positions.
You are contacting me about a position that I already work in. I have been working for Disney since January of 2007, have been a full-time employee of theirs since April of 2008, and am still working there. Had anyone looked at the first entry in the employment experience section of my resume, they would have noticed that.
Again, please read our resumes before contacting us about positions that we already hold, or that have nothing to do with our knowledge and previous work experience, just because our resume happens to contain some keyword out of the job description.
Erik St. Martin
I know this probably won’t help; he is probably on to his next victim already. But it made me feel a little better.
I was wandering around today and happened to run into a super-lightweight CMS called le.cms. Intrigued, I continued to read about the benefits of the application, and I read this:
The content is stored in text files, one per page, which means that no matter how many pages there are, page load time remains virtually the same, unlike a CMS with content stored in a database that takes longer and longer to query as more content is added.
I was shocked. They cannot be serious, right? It seems as if, in their opinion, databases have been a waste of researchers’ time. I don’t know where to begin dismissing this, so I pose these questions:
- If flat files are so much better and faster, why does the majority of software use databases, and why were databases invented in the first place?
- What do databases use to store their information? You guessed it: files! Except that a huge amount of effort has been put into making sure that I/O is optimized, and into caching commonly accessed data in memory.
On to my question about their architecture: the claim that no matter how many files there are, load time is virtually the same. How much do you know about file I/O? Say you have 1,000 articles that have been written to disk over the course of 5 or 6 years; I dare say they are going to be spread out across the disk. Now, a site with 1,000 articles should have multiple simultaneous users, maybe hundreds. What do you suppose happens? There is going to be overhead while the disk seeks to all these different positions; maybe you’ll be in luck and the memory won’t have been reused by another process, so the file will still be cached for the second request.
On to scaling: when all this I/O, and even just the load, becomes too much for one server, what is to be done? Clustering should be fun; you will need to move these files to some sort of NAS device and manage them from there.
It’s not that I don’t see a small, lightweight CMS as being useful; there are plenty of people out there for whom it is extremely useful. But don’t play up your software by playing down proven technology. When using statements like this as benefits of your software, you may want to do some research to see how accurate you are.