Saturday, August 25, 2012

The connection with EA Online was lost

I haven't felt like actually sitting down to play Battlefield 3 for quite a while. I played it a bit after I bought it, but the single player was mediocre at best, and I wasn't willing to cough up another $80 for my Xbox Live subscription just so I could play Battlefield online. Not when I had other options.

Earlier this year though, I had to move house. Once at our new place we had problems getting an internet connection, so I thought to myself "Hey, this would be a great time to finish off that Battlefield 3 single player campaign". So I pop the disc into my Xbox and start up the game. I go into campaign mode and select "Resume Campaign".

I hear the rattle of gunfire, shouting, and then the screen goes blank, save for a loading icon. I'm booted back to the main menu, with an error clamouring for my attention: "The connection with EA Online was lost". No sh**. There never was a connection.


Some problems with this scenario:
  • This is the single-player portion of the game. Why do I need to be connected to the internet at all times for a game I paid $100 for? Evidently being a good release-day customer counts for nothing anymore.
  • Why am I being booted from my single-player game after actually getting into it? If I need to be connected to EA Online (whatever that is), why can't it inform me of this at the menu?
  • What the f**k EA?
Ok, so I guess this is some of that newfangled "Always-On" DRM. I thought we weren't supposed to have that on consoles because we don't have the same piracy problem as PCs? Most console owners are law-abiding citizens, right? Apparently not to EA.

Anyway, three months later (internet connection now working), I decide to give the single-player campaign another go. I start up the Xbox, pop the disc in, and select "Resume Campaign". I get into the game, finally!

I frantically run from cover to cover, loosing rounds as I go to keep enemies in check. I'm pinned behind a bus shelter. Not the best cover. An enemy yells as he spots me, barely 20 metres from my position, and a hail of bullets starts pummelling my cover. I swing out from behind the shelter, gun raised and... blank.

"The connection to EA Online was lost"

Almost a full minute; that must be a record. Frustrated, and slightly dejected that I ever paid $100 for such a game, I turn to the internet for answers. Why am I getting this error even when I have a valid internet connection?

Searching back through the history of various forum posts, I find that an update seems to have bugged out EA's "Always-On" DRM so that it boots players out of the game after about a minute or less in single player, even when connected. Various solutions include deleting save files and starting over, or deleting the massive multiplayer patch and re-downloading it (internet caps in New Zealand are pretty appalling). Deleting the problematic patch seems to be the best option.

In the past I've expressed a dislike for this sort of intrusive always-on DRM, but I've generally lived with it. It even irked me when I was without internet and couldn't play my single-player Steam games without a valid connection to Steam. What have I done as a gamer to deserve being treated like a criminal? It's like going to a carnival with your parents, but they won't let you go on any of the rides unless they can hold your hand the entire time. I'm 24 years old; no hand-holding is required.

I'll tell you what: I certainly didn't expect this sort of treatment from a game I paid $100 for. Between the pretty half-hearted single-player (why include it at all?) and this frustrating DRM that only serves to lock me out of a game I already bought, I just don't see where the value is. I've always had fun with the Battlefield series, but it feels like EA and DICE have become so paranoid about losing control of their game that they won't let anyone have any fun with it.

What happened to serving customers first? What happened to creating great gaming experiences? My next game purchase will go towards supporting something a little less conventional. Something that gives me the freedom to play where I want, when I want, on the platform of my choice. Game developers are getting more freedom than ever by crowdfunding their own projects. Maybe it's time the gaming community at large joined them so we can ditch the restrictive thinking of publishers. We can vote with our wallets, and I certainly will.

P.S. I think this Kickstarter project shows real promise.

Saturday, August 11, 2012

Changing Course

I sift through my pile of unsorted important documents, a stash I've been building and culling since before I left high school. Old bank statements, mobile service contracts, bills. The sorting is such a laborious task; I scan each document for a date, eyes darting from header to footer, paragraph to paragraph.

But every now and then I stop and stare at one of these ordinary pieces of paper as words catch my eye: my high-school yearbook, the written reference from my high-school fast-food job, the sign-up contract for the mobile phone I bought during my first week of university classes. Each memory has been collecting dust, lying unused and dormant while I go on with my post-university life.

I pick up an old birthday card and open the cover. It's from my parents, on my 21st birthday. They wish me luck on a planned overseas trip to the Gold Coast. 21? When was I ever 21? What was it like being 21? I have to remind myself it's just another number. 21, 22, 23, 24. It feels like the prime years are already beginning to fade, like summer turning to autumn.


What have I done since those exciting and challenging years at University? I've tried to settle into a rhythm of nine-to-five; I've tried to fit in. How is it going? Not great. Life feels like it's flying by at an alarming rate, like there's nothing happening to me that's noteworthy enough to justify even a brief pause. Since graduating from University the days have blurred together. My environment is no longer dynamic; constantly changing lecture schedules, classes and topics have been replaced by a monotonous ebb and flow of daily life. Same office, same co-workers, same project, same technology. Days and weeks are starting to flow into months and years. Like a runaway train steaming towards a precipice, I am acutely aware of the apparent inevitability of my future. I can occasionally catch glimpses of the edge as I steam forward.

I feel like I need to put on the brakes; I need to start regulating my own flow. I need to feel like I'm not just ashes and dust, blowing about in the wind. I need experiences and challenges that bring out the best in me. I need other people to share them with. I need change. I'll admit, I sound pretty needy right now, but I'm not afraid to admit that the one thing I do need is help.

That's why I've started trying to branch out. Rather than continuing along the same path I've been stuck in for the last couple of years, I'm trying to find other people who share my interests in computers, technology, and building things. I've been to a local maker space, and been impressed and excited by the projects other people are working on. I'm looking at joining a sporting club once this cold and miserable winter (ok, all winters are like that) is over; I'm thinking sailing. I'm going to book myself into a community course on woodworking to try and improve my fabrication and crafting skills.

I still don't know what to do as far as employment goes. Lately I've been working on a side-project that uses an Arduino as a radio relay station controller, and I've been able to sink huge amounts of my time into the project without getting bored or frustrated. How can I bring the same kind of enjoyment I get from this side project into my daily work? I think the main appeal of my side projects is just the huge amount of freedom, which is going to be very difficult to get at most day jobs.

Long-term it would be nice to have the following freedoms in my employment:

  • Freedom to choose when I work. I'm most productive first thing in the morning, and last thing at night. That big bit of day in the middle, where I'd rather use the precious daylight to go for a run or a bike ride, is just a huge write-off.
  • Freedom to choose how I work. Clients shouldn't care whether their solution is an Arduino programmed in C or a 16-core server using Go and OpenCL, as long as it meets the requirements, and solves the defined problem. If interoperability with a certain language is a problem, make it a requirement. And yes, I want to know why you need Java interop.
  • Freedom to choose where I work. Programming is a unique discipline. Team members still need to talk to each other and their superiors or clients; I don't think that always mandates physical meetings. I'd be happy to meet you at a local cafe to discuss the latest project prototype. I'll be in Auckland next week visiting my parents, but I'll still be getting work done.
I know I'm probably sounding really picky by now, but I feel like these three freedoms will help me feel like I am in control of my own life. I want to enjoy the time I spend working; I want to create something amazing and revolutionary; I want to work hard.


Friday, July 20, 2012

A Turning Point

I had a pretty rough life going through school, as did most who fall in the geek/nerd categories. I think that was what gave me the idea that I was entitled to an easy ride at some point. In those moments of absolute despair, I took hope in the fact that with so much suffering, there must be some sort of light at the end of the tunnel to look forward to. This was what kept me going through those very tough high school years, and at some point, it was all that I had.

Once I got past high school, I went to University, where I finally felt like I was in the company of like-minded people. To be honest I don't think I ever found anyone like me, but I at least felt safe in my daily life. Once at University I worked even harder than I had in high school, with the light at the end of that tunnel being the degree I would have, which would surely help in making life as easy as I had predicted in high school.

By this point you can probably see the problem that I have only just seen myself. All these years I had this idea that, because of some emotional and physical suffering I endured in the early years of my life, I somehow qualified for better luck down the road than other people. The idea that solidified in my mind at the time was that pain and suffering would lead to a reward. In cargo-cult fashion, I assumed that a reward would automatically follow my sacrifice.

This belief I have harboured for so many years has just led to more desperation and frustration with the fact that my life hasn't turned around at some point. My life is no easier now than it was when I was in high school, and it probably never will be unless I make the right choices.

In the end I have realised that the hard parts of my life are of my own creation. I have always been so resistant to bending or changing my thinking for other people. My logic was that changing something as core as what I felt about something emotionally would somehow dilute who or what I was as a person. That's not to say I never changed my mind on something or took on someone else's thoughts and opinions. It's just that when I became emotionally attached to an idea, I no longer felt that I could give it up easily without sacrificing my integrity and somehow cheating myself.

In reality this just made me hard to work with: people would not understand my attachment to a particular idea or opinion, and would either give up and leave me alone, or become actively aggressive towards me. This is what has made my life so lonely. I have created my isolation through a combination of defensiveness and a focus on the wrong goals and ideals. Luckily, a few people throughout my life have understood some of these aspects of my personality and have learned to live with me for who I am.

I have always been focused on being the "best X", where X has been school student, university student, and now hacker. During the later years of my university education and my early working career, I have been obsessed with becoming a "hacker". Not the type that breaks into other people's computers. The type that creates, tinkers and improves.

Although I wouldn't class myself as a hacker, I have always strived to follow the wisdom handed down by the community. Be good at programming, very very good. Always ask questions, never trust a black box. Do things in the smallest and simplest way possible (this one is ambiguous, some people seem to think it means produce the least code possible, while other people think it is about some greater concept of efficiency). Push the boundaries of what is possible.

Although the community and these mantras can be positive, I think I have allowed them to amplify the existing defects in my personality. Rather than taking these goals and using them in a positive and inclusive way, I chose to use them in a negative and exclusive way. Instead of choosing to maximise how well I worked when I was programming, I just programmed more. Instead of asking other people questions when I needed help, I chose to ask questions as challenges to other people, in a manner that showed off my existing knowledge, rather than augmenting it. Instead of doing things as simply as possible, I chose to do things the hardest way.

This is not a result of the hacker community (though I do see others in the community making the same mistakes as me), but rather a result of how my personality amplified the ethos of the community, and vice versa. In the end this is my fault; I am the one who made the decisions that led to this point.

So what can I do to start improving these flaws in my personality, and their effects? Here are some things I think I could start with:

  • I need to focus less on the task of programming itself, and more on the peripheral tasks that maximise how effectively that time is spent. I need to do documentation and reports, because those will help other people understand what I am doing, and they may even help me understand what I did when I come back to it later. Take some time out; exercise more, socialise and interact with other people, have some fun.
  • Be polite to other people. Refrain from telling someone something unless it's actually important. Just answer people's questions with exactly what they need to know, no more. I should always listen more than I speak.
  • I have to consider factors other than my own effort and enjoyment when engineering solutions. I can't always use my favourite language for programming a solution because I will be creating more work (and hence cost) for myself and others when I have to develop something from scratch. I need to give more weight to other people's ideas, especially when they come from those with more experience or authority than me.
  • I should stop hating companies that never did anything to hurt me personally. Most people are not going to try and exploit me. I need to stop assuming that working for someone else makes me a slave; I chose to work here. I need to remember that I have important customers who need the service I help provide. I should stop complaining about how unfair and evil big companies are; they're full of people just like me.
In the end I need to make these changes to live a happier and healthier life. I think it is possible to retain my goal of becoming a great hacker, and at the same time become a better person and enjoy life more. Time will tell.

   Nick.

Thursday, March 29, 2012

SICP Initial Thoughts

After installing Racket I opened my browser to MIT's excellent book, Structure and Interpretation of Computer Programs. I have been through a four-year engineering course, but never in all my time have I come across a textbook quite as impressive as this. I only had to get as far as the foreword to be amazed by the level of thought and clarity of writing evident in this text.

It opens with a strong and declarative first paragraph.
Educators, generals, dieticians, psychologists, and parents program. Armies, students, and some societies are programmed. An assault on large problems employs a succession of programs, most of which spring into existence en route. These programs are rife with issues that appear to be particular to the problem at hand. To appreciate programming as an intellectual activity in its own right you must turn to computer programming; you must read and write computer programs -- many of them. It doesn't matter much what the programs are about or what applications they serve. What does matter is how well they perform and how smoothly they fit with other programs in the creation of still greater programs. The programmer must seek both perfection of part and adequacy of collection.
To me, this first paragraph of the entire book gets to the real heart of programming: what it is, why we do it, and how. It evokes the mystical and powerful nature of programming that has entrapped many a young technologist. It describes the difficulties faced on the road to learning this dark art, and hints that the road will be a long one. Programming is an art, perfected through practice and discipline.
Every computer program is a model, hatched in the mind, of a real or mental process. These processes, arising from human experience and thought, are huge in number, intricate in detail, and at any time only partially understood. They are modeled to our permanent satisfaction rarely by our computer programs. Thus even though our programs are carefully handcrafted discrete collections of symbols, mosaics of interlocking functions, they continually evolve: we change them as our perception of the model deepens, enlarges, generalizes until the model ultimately attains a metastable place within still another model with which we struggle. The source of the exhilaration associated with computer programming is the continual unfolding within the mind and on the computer of mechanisms expressed as programs and the explosion of perception they generate. If art interprets our dreams, the computer executes them in the guise of programs!
This is such a beautiful description of our art form. It lays plain that the real product of a programmer's work is not code; it is a model. This model dictates how we interface between the real world and the logic of a computer. This model may be but a part of a larger model, an actor in a greater world. As programmers we wrestle with these models; we seek to understand and optimise the model, until we can finally fit it within the constraints of a computer program.
Unfortunately, as programs get large and complicated, as they almost always do, the adequacy, consistency, and correctness of the specifications themselves become open to doubt, so that complete formal arguments of correctness seldom accompany large programs. Since large programs grow from small ones, it is crucial that we develop an arsenal of standard program structures of whose correctness we have become sure -- we call them idioms -- and learn to combine them into larger structures using organizational techniques of proven value.
Now the writer speaks about the content of this book, and its purpose. Computer programs always start small. Large computer programs are often made of many smaller programs. Unfortunately, as larger programs form, it becomes harder to manage the agreement of specifications and implementations. This is where computer science can help. By forming standard techniques in the form of idioms and algorithms, we can ensure consistency and quality in the approach taken no matter what problem is at hand.

Next, the language used in this text is considered.
Pascal is for building pyramids -- imposing, breathtaking, static structures built by armies pushing heavy blocks into place. Lisp is for building organisms -- imposing, breathtaking, dynamic structures built by squads fitting fluctuating myriads of simpler organisms into place. The organizing principles used are the same in both cases, except for one extraordinarily important difference: The discretionary exportable functionality entrusted to the individual Lisp programmer is more than an order of magnitude greater than that to be found within Pascal enterprises.
I think this is actually a reasonable approximation of the difference between widespread modern languages such as C# and Java and the more academic or experimental languages. The former often strive to build structures and machines that may be very static and uninteresting in nature, but will stand the test of time. The latter tend to be much more dynamic and unstable in nature, often trying new and outlandish ideas in particular areas of the language for a potentially positive gain. The writer finishes by adding:
As a result the pyramid must stand unchanged for a millennium; the organism must evolve or perish.
This is also a great description of modern businesses; the current hive of innovation in Silicon Valley is all about this "fail fast, evolve or perish" type of environment, while many much larger companies (and industries) that have long stood the test of time are starting to crumble in the face of such opposition.

To me, this book has already stirred deep thoughts on the nature of computer programming, and of the technology industry at large, and I have yet to write a line of code. It will be interesting to see whether the rest of the book is as enlightening.

All quotes are sourced from the foreword to Structure and Interpretation of Computer Programs, written by Alan J. Perlis.

Tuesday, March 27, 2012

A Journey into Functional Programming/Lisp

I've officially been a "Developer" for about 2 years now, but still I can't shake this feeling that ever since I left university my skills have been stagnating. Maybe it's imposter syndrome or something; I just don't feel like the reality of the working environment has lived up to the ideals I conjured up while I was still undergoing tertiary education, or that I have lived up to what I thought I was capable of. Anyway, driven by this feeling of insecurity in my current skillset, I've been doing a bit of research on Functional Programming (FP), and in particular the Lisp-like languages which seem geared mainly towards FP.

My interest in functional programming was spurred by this blog post which turned up on Hacker News. It doesn't completely explain FP, but it does a good job of summarizing a few key points, which definitely got me interested enough in Functional Programming to try and learn a little more. After searching around the web for information on Functional Programming, there seemed to be at least three main languages used for functional programming: Haskell, Erlang, and Lisp. Yes, I know this glosses over many other functional languages; these were just the ones I identified as the most widely used functional languages. That phrase seems to be a bit of an oxymoron: my research also revealed that functional languages are only used sporadically outside of a few specific domains such as academia (Haskell) or telecommunications (Erlang).

I settled on Lisp for multiple reasons:

  • Lisp is one of the oldest high-level programming languages around, only rivalled by Fortran. It's been around for over 50 years, and there's a good chance it'll still be around for another 50.
  • Lisp has a range of different implementations which apply different tweaks and changes to the language, but all keep most of the same underlying syntax. This means Lisp is actually an ecosystem of many different languages, from Common Lisp, to Scheme (a whole other sub-category of Lisp-like languages), to Clojure.
  • The language itself isn't that much of a jump from imperative programming. It is actually simpler than some mainstream programming languages; most Lisps break down into three main types of syntactical elements: lists, labels and atoms/values (there's a small example below). It made sense immediately when I saw examples of it. I couldn't say the same for the first Haskell or Erlang code I looked at.
  • There's some great material available from MIT on learning Lisp, since they used to use it for their introductory computer programming courses, as well as artificial intelligence work.
As I mentioned, Lisp is technically a collection of languages which have been derived from the original Lisp implementation over time. Scheme is a sub-category of these derivatives that focuses on a very small and stable core that can be easily extended. This makes it an excellent choice for an educational setting. In my case I chose to go with one of the (currently) most popular Scheme implementations: Racket (formerly PLT Scheme). It comes with an excellent REPL (Read-Eval-Print-Loop) environment and plenty of built-in functionality, so programmers can get up and running very quickly.
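
To give a rough idea of what I mean by lists, labels and atoms, here is the kind of thing a first session at the Racket REPL might look like (a toy example of my own, not taken from any book):

    > (+ 1 (* 2 3))        ; a list whose first element (an atom) names a procedure
    7
    > (define (square x)   ; define attaches the label 'square' to a new procedure
        (* x x))
    > (square 5)
    25

Everything is either an atom (1, x, square) or a parenthesised list built from atoms and other lists; that is essentially the whole syntax.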

Now my plan is to work my way through How to Design Programs and Structure and Interpretation of Computer Programs; hopefully I'll learn something in the process. Either way, I'm sure it will be an interesting journey into the world of functional programming and Lisp-like languages. Hopefully this will prepare me for a closer look at more advanced languages such as Haskell or Erlang in the future.

Wednesday, March 14, 2012

Free market ticket sales, the only real way to kill scalpers?

It seems to be all over the news at the moment; new legislation and ticket systems have done nothing to abate scalping of tickets for concerts and events in New Zealand and all over the world.

Scalpers depend on the supply/demand characteristics of a market being tipped far in favour of demand. What if the demand is removed? Ticket vendors and event organisers have tried to implement this with threats that tickets found to be onsold will not be honoured. The logic is that if people don't buy tickets from scalpers, there will be no market for the scalpers to exploit. The failure on the part of the organisers and vendors is that this implementation of that logic is fatally flawed: good, honest people still want to buy tickets, and because there is a finite supply of said tickets, once an event sells out they have to buy from the only source available: scalpers. This has been somewhat successful in decreasing the demand in the scalping market, but it has also meant that some people have had to make the conscious decision not to buy tickets for an event that they want to go to.

My theory is that the only way to truly reduce the demand for scalped tickets to zero (or anywhere near it) is to allow market demand to dictate ticket prices. Yes, I can hear the horrified screams of bands and music fans everywhere as I say this. What do you mean we should have to pay hundreds of dollars to go to the concert by that band I love?!?! WTFBBQ?!?!

Sure, ticket prices would probably skyrocket immediately. But consider this: instead of the hard-earned money you spend on tickets going to scalpers (if you're unlucky enough not to get in early), the money goes to the promoter, the venue, and of course the performer. For big name bands with huge fan bases this would be a massive win in terms of revenue from concerts. But what about the social implications?

Many bands are known to impose limits on ticket prices on purpose, just so that they can allow their most loyal fans and followers to attend concerts. Allowing only the richest fans to attend a concert will mean the overall culture suffers, right? Suddenly who can go and who cannot is dictated purely by how much money that person is willing to pay. Is this fair? That could be an entire other discussion in itself, and I don't particularly want to get into it here.

My thoughts are less on the implications of this on the followers and attendees of events, and more on the effect it will have for scalpers and bands, and how such a system could be implemented to maximise success for both bands and fans. I want to see less money going to scalpers and more money going to the performers and musicians that people are paying to see.

So how would we implement such a system? The only way scalpers can be defeated completely is if there is no market for the tickets they buy. So how can we do this?

  • Decrease the demand for second-hand tickets - this is the approach currently being tried by governments and organisers.
  • Increase the supply of tickets - this simply can't be done for most shows, especially if you already have the biggest venue in town.
Obviously, the only real option is to reduce the demand for scalped tickets. How can we do this? The approach already being taken is to reduce the demand by increasing the risk to potential buyers. This results in a losing situation for buyers who just want to see their favourite band when their tickets are invalidated or they are barred from entering a concert.

What if ticket sales are turned into a priority list, ordered by the amount paid, which is completely fluid until tickets are printed at some time before the concert? Suddenly legitimate buyers can simply outbid scalpers. If scalpers try and outbid the legitimate buyers, they will find themselves with tickets on the night of the concert that no one wants to pay more for (because those people have already outbid the scalpers). Of course this system would not eliminate scalpers altogether; there would still be a market for scalped tickets among latecomers who wanted to buy tickets at the last minute. The implementation of such a system would be complex because of all the stakeholders involved. And as I have said, there are significant social issues and business relationships that would have to be sorted through before such a system could be put in place. Not impossible, just the sort of operation that would be suited to a committed and skilled entrepreneur.
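
To make the core rule concrete, here is a rough sketch in Racket; the names, like allocate-tickets, are mine for illustration only, and a real system would obviously also need reserve prices, refunds for outbid buyers, and so on:

    #lang racket

    ; A bid pairs a buyer with the amount they are currently prepared to pay.
    (struct bid (buyer amount) #:transparent)

    ; At printing time, the n available tickets go to the n highest standing bids.
    (define (allocate-tickets bids n)
      (take (sort bids > #:key bid-amount) n))

    (allocate-tickets (list (bid "fan" 150) (bid "scalper" 90) (bid "latecomer" 120)) 2)
    ; => the bids from "fan" (150) and "latecomer" (120); the scalper is simply outbid.

The point is that anyone who genuinely wants a ticket can outbid a scalper directly, so the scalper's resale margin evaporates.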

The other option of course is to put up with scalpers, and accept them as a natural evolution of the supply/demand market. Either way, people will always have to pay market rates for tickets, whether that market is set by the organisers or the scalpers. There is no escaping the supply/demand curve, it seems. That may sound cruel, but it seems to be the honest truth of selling such limited-quantity goods.

I'm only an engineer, not a marketing or sales person. So by all means, feel free to discuss any flaws or shortcomings in the theories and opinions I have expressed.

Tuesday, February 21, 2012

SVN vs. DVCS

First post for 2012.

I was going to make this an SVN vs. Mercurial post, due to my recent efforts to push Mercurial as a replacement for SVN at my current workplace. But such a post would miss the fact that what I'm really pushing is a migration away from SVN to a Distributed Version Control System (DVCS). Mercurial is just the choice of DVCS the company I work for has settled on, but there are plenty of other options, each more appropriate for a particular community or purpose. So SVN vs. DVCS it is.

I'll preface this post with a couple of disclaimers. First, yes, some of this is just my opinion. Secondly, SVN isn't terrible; there are still millions of software developers all over the world that use it daily as their version control system of choice (even if most of the hacker/open-source world has migrated away from it already). My argument is just that a DVCS and the workflow associated with it is better.

First, let's start with the word Distributed. Distributed Version Control Systems contain this word as part of their name because of the way they spread the repository out to all users. In SVN you would check out a particular revision and branch/tag of a repository (and that is all that is stored on your local machine); with a DVCS system, each user checks out a full copy of the repository. This means every user has all branches, all tags, and all revisions of all files.

You might think that checking out every tag, branch and revision would be very slow. DVCS systems mostly get around this by being smart about how they store and transmit changes; Git has a fairly complex database and garbage collection/compression system to keep on top of the worst of the repository bloat, while Mercurial stores file deltas rather than whole copies of each file.

Because of the way the repository is spread out to all users, DVCS systems allow users to commit to the local repository, as well as pushing their changes to a remote host. The biggest change in workflow this creates between SVN and DVCS systems is that it is now possible to commit any change to source control without affecting other users. I feel that this is the most significant benefit of a DVCS system on a day-to-day basis; the ability to commit early and often proves invaluable when it comes to experimentation and large chunks of development work. This has flow-on effects such as improving merge operations: changes are divided up into smaller chunks, and DVCS systems focus on the history of changes between revisions rather than the absolute state of a file at the two revisions being merged.

The workflow in DVCS systems is generally divided up into at least three steps:

  • Pull changes from the central server or from another user.
  • Commit changes to the local copy of the repository.
  • Push the changes in the current repository state to the central server or another user.
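In Mercurial terms, a typical session might look something like this (the repository URL is made up, obviously):

    hg clone https://server/project    # one-off: copy down the full repository, history and all
    hg pull                            # fetch any new changesets from the central server
    hg update                          # bring the working copy up to the latest changeset
    hg commit -m "Experiment with a new parser"   # a local commit; nobody else sees it yet
    hg commit -m "Fix the parser edge cases"      # commit as early and often as you like
    hg push                            # finally share the whole string of changesets
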
One of my colleagues at work pointed out a very real downside to this pull-commit-push workflow: what if people simply work in their local copy of the repository for a week and something happens to their computer, or the office burns down? The central server is backed up, but individual developer machines aren't. I mumbled something about no one being silly enough to do such a thing, but it raised an interesting point. Personally I think this would be the equivalent of someone working in an SVN repository for a week and not committing; but a DVCS lulls the user into a false sense of security on this front, because when files are committed locally, they're in version control, right? Sure, but that doesn't guarantee data replication and integrity. Logic should dictate that people always push their DVCS changes to a server or another user at least once every day or two.

But how do we deal with large chunks of isolated development in a DVCS repository? In SVN we would create a branch, commit to the branch, and then merge the branch back into the main trunk of the repository once we are finished. This process has never been as fluid or seamless as was promised in the early days of SVN, especially on large repositories and changes. In DVCS systems, SVN-style branches and tags are replaced by lightweight labels or pointers. Because the repository history is encoded as a string of changes, the head/trunk of the repository is just a pointer to the most recent change in that string of changes. A branch is just a pointer to a different string of changes, which may share many of those changes with other branches or tags. This means that to create a branch, you simply create a new label and attach it to the changeset you want to base the branch on, preserving the current head label/tag as it stands. If you need to merge the changes from a branch back into the trunk of a DVCS repository, the repository simply applies the file changes in the branch on top of the file changes in the trunk.
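
As a concrete example, here is roughly what that looks like with Mercurial's named branches, assuming the default branch has picked up other people's changes in the meantime so there is actually something to merge:

    hg branch crazy-idea               # the next commit starts a new named branch
    hg commit -m "First cut of the crazy idea"
    hg commit -m "More work on the crazy idea"
    hg update default                  # jump the working copy back to the main line
    hg merge crazy-idea                # lay the branch's changes on top of the trunk
    hg commit -m "Merge crazy-idea back into default"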

Overall I think a DVCS system has huge advantages over a standard SVN workflow. The focus on changes, rather than the instantaneous state of a repository, more closely aligns with a software development workflow. The decentralisation of the version control system allows for finer-grained control over commit points, and allows users to commit code to source control even when doing so would normally have a negative effect on other users in an SVN-based version control system. SVN chains users to a repository server, while DVCS systems allow users to be the drivers of information flow.

Don't take my word for it, check out Mercurial, Git, or any of the other awesome DVCS systems out there!