
Vim vs Rubymine?

I’ve been in the vi/Vim camp for a long, long time. When I got started with Unix (I started out on HP-UX – but don’t worry, Richard showed me Linux and I’ve never looked back) you had two choices – vi and pico. Pico was a simple text editor. If memory serves, it was developed to be the text editor for the pine email system. As a consequence, it had one feature that could make it a nightmare – it automatically wrapped lines. (As pointed out by Richard, you can avoid this fate by remembering to add a -w. It didn’t help that for licensing reasons there was/is no pico for Linux. They have nano now – but by the time that was standard I’d paid my dues.) Wrapping is pretty desirable for email, but terrible if you are writing code or a config file. I ended up learning the basics of vi.

I have a friend or two who swear by emacs. It has always been on my list of things to spend a week really learning, but I’ve just never gotten there. The reason is simple. Every Linux box ships with Vim. It is right there waiting for you. (Ubuntu’s default is a little crippled, but I’ll get to that.) So if you end up modifying files on remote machines (which was once a regular thing for me) it is an essential skill.

If you are going to use Vim – there are a couple of things that I couldn’t survive without. Namely splitting and search & replace. Splitting is easy to explain – you are able to split the terminal in half (or smaller, depending on how many things you keep open at once). You can even split the file you are in so you can look at two different parts of the same file. That ends up being pretty useful when you are dealing with code, or you are looking at a config block that should be similar to another block in the same file. The keyboard command is

ESC :sp

If you need to switch between “windows” there are a bunch of ways (like everything else in Vim), but you can stick to Ctrl-W w to cycle through them.

Several of the people I’ve worked with complained most about my constant opening files, splitting, making changes, and then closing files before they knew what files I was opening, let alone what I was changing. Once you get the hang of it, it is awesome. Unless of course you are on an Ubuntu box that has vim-tiny. Vim-tiny doesn’t support splitting. It drives me insane and is the first thing I upgrade when I touch a box.

Which leads to the other essential skill – search and replace. There are probably books written on this subject. I won’t repeat them, but I will show you some things that I end up doing a lot.


%s/\(.*\)/mv \1 \1/g

That says: for every line (% means all lines, s means substitute), take the contents of the line (captured by the \(.*\)) and replace it with mv (the contents of the line) (the contents of the line). I end up using this to build a shell script to rename files en masse. You can do more than this – but it gives you a taste. And if editing a single file isn’t enough you can do the old


vi `grep -rl pattern *`
:argdo %s/pattern/new_pattern/ge |update

That builds up a list of files recursively and throws it into Vim. Then the argdo lets you do a search and replace across all the files. You can do something similar with :args but I can never remember how to make it recursive.
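If you don’t need to eyeball each file as you go, the same multi-file replace can be done entirely from the shell. A sketch (assumes GNU sed; the demo directory and the pattern/new_pattern names are made up, and sed -i edits files in place, so try it on a copy first):

```shell
# Set up a throwaway directory with a couple of files containing "pattern",
# then do the recursive replace non-interactively – the shell cousin of
# vi `grep -rl pattern *` followed by :argdo %s/pattern/new_pattern/ge.
rm -rf /tmp/replace-demo
mkdir -p /tmp/replace-demo/sub
echo 'first pattern'  > /tmp/replace-demo/a.txt
echo 'second pattern' > /tmp/replace-demo/sub/b.txt
cd /tmp/replace-demo
grep -rl pattern . | xargs sed -i 's/pattern/new_pattern/g'
grep -r new_pattern .   # both files now show new_pattern
```

The argdo route still wins when you want to review each file before saving; the shell route is for when you already trust the pattern.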

Over the years I’ve programmed in a lot of different languages. I always seem to move on to the next one just as the tools for the one I’m in start to get good. So I’ve never really used an IDE (Integrated Development Environment) as part of my work. I couldn’t afford them when I did C/C++. They didn’t exist for PHP or Python when I wrote in them. The Ruby on Rails community was completely dominated by TextMate (and the Mac fanboyism that went with it). But since then things have improved a lot. There are even choices now. I originally got exposed to RubyMine when the developers on the team were starting to move away from TextMate. They seemed to like it. I didn’t think much of it since I had Vim. Then later, when I was working as a developer on TruckingOffice in a pair, I got an ultimatum: I had to use a tool in common with the other developers or I wasn’t going to get to stay in the pit. I guess I split files one too many times. I reluctantly switched over to RubyMine. As time has passed, I’ve come to really like it. I have even been playing with the community edition of IntelliJ since I’m experimenting with Android development.

It has more features than I could shake a stick at, but there are two that have become essential to my sanity. The first is focused tests. You can point at a single example in a spec file and have it run. This saves me a bunch of time when I’m working on adding a feature or fixing a bug. At one point, I spent a week learning the internals of Vim enough that I could add a plugin to do the same thing. I got something to work, but it is nowhere near as elegant. Especially because of the other feature that I’m hooked on – debugging.

In the past, debugging to me meant adding printf/echo/puts to the file that was causing the problem. RubyMine has an excellent debugger built in. And if a test fails, you can easily add a breakpoint and switch from running the spec to debugging it. Between RubyMine’s debugger and Chrome’s debugger, I’m left wondering how I programmed blindly for so long.

I would love to say that now I do everything in RubyMine. I guess old habits die hard. I do all my coding and testing there. I still end up with terminals running tasks (which I know RubyMine can do) and opening files from time to time in Vim, because in some ways it is just how I think (even though RubyMine has enough keyboard shortcuts to satisfy a six-armed man).

So if you are a system administrator – get learning Vim. Add a new command a day and in ten years you’ll have mastered it. If you are a Ruby dev, I highly recommend RubyMine. I’ve resisted a lot of changes to how I develop, but this one is worth the learning curve.


From a friend who cares about me….

Saw your post on playing with *cough* Java *cough* .

You may be interested in Rhomobile. Haven’t played with it myself
’cause my phone is dumb, but if it helps you stay out of Java-land, it
may be worth a look:

From their website:

We at Rhomobile are big fans of both Ruby on Rails and Ruby. We’ve all developed quite a few applications, including mobile web applications, in Ruby. But we believe that the best mobile applications execute natively against local data, as opposed to connecting to remote websites. Rhodes allows you to build such applications easily, writing the app one time in a way such that it will run natively on all major smartphone operating systems: iPhone, Windows Mobile, BlackBerries and Symbian.

Rhodes is essentially a combination of:

* a minimalist Ruby implementation for mobile devices
* a Model-View-Controller “microframework” that consists of a directory structure and file naming convention. Writing to this framework primarily consists of editing a set of ERB templates for creating HTML to display data
* an application generator called RhoGen which generates a basic Create-Read-Update-Delete controller and associated views for specified data objects
* the RhoSync synchronization engine client
* a simple object relational manager (ORM) called Rhom
* a web server which is installed on the mobile device and runs locally

All of Rhodes fits in around 2 megabytes of local storage on the mobile device. In this article we’ll describe how to use Rhodes to provide a mobile interface to your backend application. The first step of this is to provide a way to get data of interest available on your mobile device.

Sounds like something I need to try out….


Different Perspective

From the Erlang List

The funny thing is that the OOP world has found one way to manage the
complexity and the code-bases that grow ugly: they are using unit
tests, and practice the art of writing testable code. (“Testable code”
is something that is simple to write unit tests for.)

* You make it very easy to supply the dependencies that the code under
test needs. Avoid things that go out and grab values from global state.
* You avoid side-effects, and the side-effects you do need (database
updates, writing files, etc.) you make sure to perform behind a
layer of indirection so that during tests you can replace the real
object with (for example) a do-nothing mock.

What they are doing is making as much of their code as possible
side-effect free and placing all that code in one method so it can be
called from a unit test. They are concentrating side-effects in
well-defined places, carefully avoiding mixing side-effects with
testable/test-worthy logic.

What they are doing is that they’re reinventing functional programming.


What Language Should I Learn?

I recently spoke at the Trinity University President’s Dinner. It was an evening to honor the university and the many people who have helped make sure that it will continue to thrive. I gave a very short talk about a moment in college that could have changed the outcome of my entire life. (I’ll tell that story again some other time).

Afterwards, a couple of students came up to me with the same question – “What Language Should I Learn?” In my mind, it generated a lot more questions. What problem do you want to solve? What do you like to work on? What kind of company do you want to work for? I didn’t think that’s what they wanted. They wanted an answer. I think of the moment in Real Genius when the professor starts saying something and the students pull out their note pads to capture his profound thought. It ends up being not very profound. I don’t think my answer was either.

My very, very first language was BASIC on the TI-99/4A. I don’t remember what I did in that language other than that my brother figured out how to generate tones – so you could make music of a sort (more like a string of tones).

I moved on to Logo. I only remember it vaguely – but I always think back to the idea of the “turtle” on the screen and how you had to tell him what to do. Later, once I learned to program, I would always be amazed at how accessible they had made programming. When I started learning C, it seemed like it would be forever before I could draw something on the screen.

Next was an odd diversion – GFA Basic. I learned it on my Atari 1040ST (that meant it had a whole meg of RAM, if you can imagine). That was really where I learned what programming was in a real sense. I really learned to love programming. I had forgotten, until I looked it up on Wikipedia, that the IDE it shipped with supported both code folding and a graphic designer you could use to build menus and windows for the Atari OS. I also spent some time refining my skills at the Midwest Computer Camp (if you can imagine mixing a classic summer camp with programming and soldering classes, you’ve got it just about nailed). The main thing I wrote that sticks in my mind now was a really complicated program to auto-generate full characters and NPCs for AD&D. I even added support for character classes from outside the core rules that were introduced in Dragon Magazine. I was friends with a professor at the school my father taught at, and he talked about a lot of other technology. I can remember wanting to learn C – but it was both cryptic and expensive to get started, so I stuck to Basic.

At camp, I sat next to a 10 year old who was managing sprites and calculating collisions. I was jealous and wanted (like many programmers before me) to start working on a game. My skills being pretty limited – I started with a text based adventure. It didn’t get very far. But then I became hooked on the show Remote Control on MTV. It was right up my alley – quirky and filled with pop culture trivia. I partnered up with a friend and started working on our own computer version of the game. He delivered a lot of the art. I handled most of the programming. I realize now that it was a heck of a lot harder to write than it should have been – but back then I didn’t have the internet or a relational database you could jam stuff into. I remember for load time we ended up putting all the graphics in a single file and then using the code to cut out sprites that I could draw on demand. I had the basic model of the game working before I moved on to other things.

I moved to Texas to start college at Trinity. I started out as a chemistry major (I didn’t realize that it would be a long time before you were allowed to blow things up so I moved on). I got a VAX account as a normal student. I didn’t show up with a computer so I spent a lot of time in the Mac Lab (Plus my roommate and suitemate both had Macs, and I’d never really liked the PC – DOS seemed so primitive compared to the GEM OS of Atari). I got a crash course in the Internet – too much IRC and Bolo. I can remember trying to get an IRC client installed on the VAX so I wouldn’t have to log out.

This is the period where I got to learn C at last. College got me started with formal programming in the most academic sense. Meaning I learned C, C++, and Scheme (that last one was a language I wasn’t able to appreciate at the time – but it’s based on Lisp, so what do you expect). I just remember writing a whole bunch of stuff that later I would learn you never write by hand (you see, there are these things called standard libraries). Now I realize I was learning the fundamentals – but it’s funny to think about how ill-prepared I was for the actual task of programming once I started doing it for real. I hope they teach people about source code control now. I still remember wiping out weeks of work and calling Edwin frantically to apologize. (He was always more prepared than me – he had a backup snapshot taken every 24 hours.)

College is also the time I first discovered Python. The very first Python program I ever ran was a script that was able to tap into the Mac’s text to speech engine. You could feed it a script and it would read all the parts in different voices. The sample text was Monty Python’s Holy Grail. It was basically as annoying as you can imagine – but it was light years ahead of the things I could do in C at the time so I was seriously impressed. It also ran on the Mac – which at the time I didn’t have any dev tools for.

I think I finally figured out that I was more suited to the Computer Science department than the Chemistry department when I realized I spent all my time in chemistry class programming my TI-Calculator. I had a very simplistic version of Kaboom!. I also wrote more practical stuff – like a program that would solve titration and give you the work that you needed to show how you got your number. I had a terrible moment when I was running to class and tripped. The calculator skidded across the sidewalk and popped out a battery. That wiped out all of my code (I didn’t have a backup schedule for my calculator).

I still chuckle thinking about all the web stuff I wrote in C and C++. I can remember actually embedding HTML into the C code. I’m amazed that I stuck it out. I think this is one of the main reasons I’ve avoided compiled code like the plague. Not because it is inherently bad – but because I used it so poorly it will always scar me.

I spent a summer learning Perl 4 – since it seemed to be the language of system administrators – and a heck of a lot more friendly to the programmer than C was. Later, I would learn that Perl can easily become a read-once language. I spent a chunk of my summer writing a shopping cart system from scratch to sell gold jewelry in Dallas, Texas (which I didn’t get a contract for and was never paid for – lesson learned). Since I didn’t actually get paid – I’m not sure if that counts as professional work or not.

The rest of my history of development would be my “professional” career. I’ll tell the sordid tale of ridiculous amounts of time poured into an e-commerce system that never really went anywhere.

Time passes. I learn what programming means – plus how to be on a team – sort of. I also managed to spend more time than I can believe in PHP. (There are a couple of functions in the core that I contributed related to obscure PostgreSQL functionality).

I’ve spent a big chunk of the last 4 years programming in Ruby. Sure, I meandered into Erlang (meaning I got to Chapter 4 in the book – variables that don’t vary – WTF?). I started looking at Clojure (mostly because I still aspire to be smart enough to think in Lisp – someday). I learned more JavaScript than you can shake a stick at (which I would find oddly satisfying to write if it weren’t for the browser). I picked up ActionScript (think JavaScript with all the parts that make it good omitted and about a ton of Java mentality slapped on top). I learned about programming Vim itself (I even got focused tests to work just like TextMate – but have since switched back to autotest). I learned Processing, both because it seemed like it might come in handy for visualizing data and because it is the language of choice to program the Arduino.

What does all that mean – and how the hell do I answer the question?

My answer was simple – and I’m hoping most programmers will agree with it.

Learn one language really well. The better you know the first one, the easier it will be to move to the second, because you’ll have a lot of the concepts. As soon as you think you know your first, start looking to pick a second.

I hope they think about it, but I wish I had added the reason for this. I’ve used an arsenal of languages in my brief career as a programmer. Most of the programmers I look up to have too – heck, even the people who write languages have learned multiple languages (especially since I love interpreted languages and almost all of them resolve down to C at some point). The idea that you can actually be a “Java”/“C++”/“Your Language Here” programmer and stay in that language exclusively forever means that you have a tremendous amount of focus, a lot of job security, or are doing a job instead of a career.

Besides – the joy of programming is the power to express ideas. Different languages enable different forms of expression. You’re kind of cheating yourself out of a lot if you try to stick to a single language.


A Lesson From Leo

Happy families are all alike; every unhappy family is unhappy in its own way. – Leo Tolstoy

It is funny – if I look back a year ago I had two very big software projects (since I can’t say what they were, we’ll call one Y and the other X). Y went really well, and by all accounts is a big success! What about X – not so much. It is slowly turning out that the second project is going to have a much more lasting effect on my design and thought processes than the first. I mean, I haven’t worked directly on X in months and it is still teaching me lessons. Proving not only that I have a lot to learn – but also that your mistakes can teach you a lot more than your successes.

I recently gave a presentation on a number of lessons I learned from the project ( A couple of those ideas I’m hoping to sit down and actually write out for posterity – and hopefully as a reminder so I don’t repeat them). But just yesterday, I got another lesson out of X! It really is the gift that keeps on giving.

My World

First, let me say that my development world is fairly constrained. Although I have worked on some seriously scaling infrastructure, by and large my focus isn’t on those kinds of problems. I always end up focusing on really complex domains. I largely credit one book with really having an impact – Domain-Driven Design. I first read it back in 2004. It does what a great book like this should do – it introduced me to great new ideas and also confirmed things I was already doing (plus gave me a language to finally describe them).

I make this point because it means I spend a lot more time thinking about what is represented in a given system (and how it is represented) than I do about how many transactions the database can handle. Because of that, the details of the domain are seriously important.

Actual vs Logical

Let’s talk about something that is totally outside of the domain I work in. How about cars?

I have one – it has the best feature of all the cars on the road – it is paid off! (I highly recommend that as a feature).

I have VW Beetle.

For most circumstances – that is plenty of information. Meaning – I have a car. If you want to know if I can go some place – that is probably enough information.

When it comes time to go to lunch – we have to figure out who is going to drive. My car can fit 4 people. That detail is important depending on how many people want to go.

If I want to find out how much my car is worth on Kelley Blue Book, I have to add the details that it is a 2000 GLT Turbo (plus a lot of other details – 5-speed, seat warmers, etc.).

The point of this is – if you asked me about my car – what you want to know about it is driven by the problem you are trying to solve by asking the question. If you ask:

Can you get to work?

Yes, I have a car. (Totally sufficient answer)

Yes, I have a 2000 VW New Beetle GLS Turbo Hatchback 2D with a 5 speed manual and seat warmers. (Is a bit nutty)

I’ll say that this is a scale from the logical to the actual. Logical being a sort of encapsulation of details that provides very limited information to the actual which has a ton of details.

Some examples:

Logical → Actual:

* Large Coke → 24oz wax-coated paper cup – Coke with 35% ice, lid – no straw
* 31D → 31D on an American 777-200 Ver. 2 – which means it is an awesome seat
* Some chips → Corn, Vegetable Oil (Contains One or More of the Following: Corn, Soybean or Sunflower Oil), Buttermilk Solids, Salt, Tomato Powder, Partially Hydrogenated Soybean Oil, Corn Syrup Solids, Corn Starch, Whey, Onion Powder, Garlic Powder, Monosodium Glutamate, Cheddar Cheese (Cultured Milk, Salt, Enzymes), Nonfat Milk Solids, Sugar, Dextrose, Malic Acid, Sodium Caseinate, Sodium Acetate, Artificial Color (Including Red 40, Blue 1, Yellow 5), Spice, Natural and Artificial Flavor, Sodium Citrate, Disodium Inosinate, and Disodium Guanylate.

I realized one of the problems with Project X was exactly this. Namely, it collected enough information to drown an actuary. (Amazingly, there is an entire site devoted to jokes about actuaries – www.actuarialjokes.com.)

Why did it collect so much detail?

Data Pack Rat

I’m a 5th generation pack rat in the real world (I’ve been through some counseling and my wife has helped a lot), but I’m ten times worse in the digital realm. I collect data – of all sorts. This inevitably bleeds over to the things I work on.

Solving The Hard Problems

That was not the real reason – even if it is a convenient answer. The real reason is that the really tough problems in the domain for X required the data. Sure the day to day stuff only needed to know the basics – but the hard problems are hard precisely because they depend on knowing the details involved. This is the same reason that Kelley Blue Book needs so much detail about my car – it is hard to give an accurate estimate if you don’t know what you are estimating.

CSI & The Infinite Zoom

This is the other part of the reason why getting the actual can be so important. On CSI, they strive for what seems like scientific accuracy (one can only assume, as a layperson). That being said – there is one situation where all the rules are thrown out. They love to zoom into pictures infinitely – even when they are digital. The worst case was the time they used footage from a security camera to get a picture of someone’s eye – and then looked at the reflection to catch the killer. Going to have to get one of those futuristic security cameras – since every one I’ve ever seen is seriously blurry.

What is the point of that little rant? Simple, if you don’t have the resolution you can’t just make it up. The same rules apply. If you only collect the make and model of cars (like they did to get my parking tag) you can’t really say how much it would be to replace that vehicle. If that is your goal – you’re screwed. My solution is typically to think about all the data we are going to need and work backward.
This doesn’t always work out – Project X being an example.

The Thin Vertical Slice

There is another complication in this whole process. We do a lot of thin vertical slicing. It is incredibly useful since it means we get better feedback from the users and it keeps us from building a lot of stuff we don’t end up needing. The downside is that when you are dealing with a lot of data that has to be collected (i.e. a world-wide audit of something) you have to decide: do you collect only what you need now, or do you grab as much as you can while you are there? Once you grab it, what do you do with it? If you track it but don’t include it in the tool you roll out, how do you keep the data up to date? If you ignore it, you’re going to be back to audit the rest eventually (or maybe you won’t? YAGAI – You Aren’t Gonna Audit It – a subset of YAGNI).

Matching The User

This is the real crux of the issue. Both Projects X and Y obsessed over the actual. Project Y was a success – X was not. Why not? At the end of the day, a simple answer – matching the domain data to the user.

In the case of Project Y, the users of the system were also obsessed with the actual data. The tool let them manage the data down to the smallest detail. Which made them more accurate and more effective. They felt no sense of overload because they were steeped in the details as part of the job. The tool just made it easier to switch to a new task. Awesome!

In the case of Project X, the users of the system didn’t care as much about the actual data. They just needed a couple of logical collections to make their decisions. The tool forced them to wade through a ton of data to get anything done. When you were doing something very, very complicated it was great – but most of the time things weren’t complicated, which means it sucked.

The lesson here is that we could have had both. We could have had the detail underneath (because we actually needed it to solve problems), but exposed less of it unless you wanted to dive in. We actually moved this way at the end of the project (somewhat unconsciously) – but by that time other issues with the system were in the way of success.

Final Thoughts: So basically – keep in mind the problem you are trying to solve and the data required to do it, but overwhelm your users with that data at your own risk!

p.s. There is probably a corollary to this idea relating to integrating applications via services. I haven’t finished this idea – but it’s probably along these lines (almost stolen verbatim from a friend). Namely – when you are consuming services, focus on what you need – and nothing more. Meaning:

  • Don’t validate data from the service you don’t care about, since you don’t want to throw out a response because it includes stuff that doesn’t affect you
  • Don’t ask for more data than you need, since you’re wasting resources and confusing the issue of what data you actually care about

Both things are part of the process of keeping the services from becoming too intertwined, but still manage to loop back to this idea of getting the detail level right.


Simple Mistake – Almost Triggers A Lot Of Work

Currently I’m grinding through the process of translating my old unit tests on a project into much nicer RSpec tests. Overall it has been a good experience. Especially since I caught some things in the process that the old tests didn’t cover.

So I started on a new controller and I wrote something like this:

describe "My Cool Test" do
  before(:each) do
    @obj = String.new
    puts "Hello"
    @obj.should_receive(:stupid_method).once
  end

  it "should do something"
  it "should do something else"
  it "should not do this"
end

The object never calls stupid_method so it should fail the expectation right?

Well, it turns out that it doesn’t – although the before block is executed, the expectation is never verified because all of the examples are pending! If you run the above code you’ll see “Hello” several times – but no error.

Now add in the following example:

it "should be empty" do
  @obj.should == ""
end

You’ll get:

Spec::Mocks::MockExpectationError in ' My Cool Test should be empty'
Mock 'String' expected :stupid_method with (any args) once, but received it 0 times

So there you go – (And don’t worry I submitted a bug)

You may be wondering why I said it almost triggered a lot of work – I’m currently using the RSpec mock library, and I was very close to switching to mocha, which would have meant modifying a heck of a lot of tests.


Getting Pownd Suxxors!

Ok, I’ve been busy lately (by lately I mean since 2.2 came out). I got a note a while ago from a friend about a problem with my RSS feed – didn’t think much about it. Then today I got an email from another friend who is trying out Firefox 3 – it has a plugin that warns you when you visit a site that has malware. Apparently, my site is listed as a malware site…

It turns out there was a hole in WordPress at some point that let them inject JavaScript and whatnot into the posts. They were hiding a bunch of it – but it showed up in the RSS.

I think I’ve cleaned out the bad posts (If you see one let me know), I’ve cleared out the back log of comments (all 25K of them), and updated to 2.5.1 – which is the latest and greatest – oh and it has security fixes for 2.5 (just released recently).

The only annoying thing is that every time I do this I have to deal with putting back in my patches to WordPress. I’m going to give git a whirl on this problem to see if it helps.
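Here is roughly the experiment I have in mind – a sketch with made-up paths and file names (core.php standing in for the WordPress tree, postie-tweak.php for one of my patches): vanilla releases live on master, my patches live on a branch, and an upgrade becomes a rebase instead of re-applying patches by hand.

```shell
set -e
rm -rf /tmp/wp-demo
mkdir -p /tmp/wp-demo && cd /tmp/wp-demo
git init
git config user.email you@example.com
git config user.name "You"
git symbolic-ref HEAD refs/heads/master       # pin the branch name
echo "wordpress 2.5"   > core.php             # stand-in for the vanilla tree
git add -A && git commit -m "vanilla 2.5"
git checkout -b local-patches                 # my patches live here
echo "my postie tweak" > postie-tweak.php
git add -A && git commit -m "local patch"
# a new WordPress release lands on master...
git checkout master
echo "wordpress 2.5.1" > core.php
git commit -am "vanilla 2.5.1"
# ...and the patches ride along instead of being re-applied by hand
git rebase master local-patches
```

When 2.5.2 shows up it is the same two steps again: commit the new vanilla tree on master, then rebase local-patches on top of it.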

Also I noticed that some people are having problems with Postie on 2.5.x – I didn’t notice because I wasn’t running 2.5 – but since I am now – I’ll be fixing whatever the main problem is since I still need Postie to work.


Getting The Hang Of Git

I signed up for GitHub (More for the tracking than anything) – I still had to sort out a git repo for the code I don’t want out there in the wild.

Turned out to be pretty straightforward ( Good Instructions or Even better ones if you like packages – Another Guide)

Looks like gitosis was really the key. The only problem I ran into was that my main server is still a Debian sarge box. (Yes, I know that’s old. Yes, it annoys the crap out of me since every other box I deal with is actually an Ubuntu box. Yes, I’m waiting for Hardy Heron to make a clean start of it.) I had to pull in a lot of packages from either etch or testing (1.5.4 is in testing). Not ideal, but it works for now.

I actually spent the last three days in the UK and while I was there I had very limited internet connectivity. No big deal – I did a bunch of work in a git repo locally. Now that I’m back home I’d prefer to make sure that the code isn’t just on my laptop (Backups are nice – so is my home workstation). I was able to easily add the new repo to my new git server. No problem.

Example from the above articles:
mkdir free_monkey
cd free_monkey
git init
git remote add origin git@YOUR_SERVER_HOSTNAME:free_monkey.git
# add some files first – blank repos are no longer pushable
git push origin master:refs/heads/master

On my workstation I just did
git clone git@YOUR_SERVER_HOSTNAME:free_monkey.git
cd free_monkey

If I need to add a new repo – or a new user – I can just

git clone git@YOUR_SERVER_HOSTNAME:gitosis-admin.git
cd gitosis-admin

Very nice!

Then I hit a small wall – basically I was in the middle of some feature work. I had two different branches off of trunk – one related to a new feature that is still a day away from being finished, and a strain of development related to migrating the app to Rails 2.0.x. It wasn’t obvious what the hell to do. Basically you have local branches and remote branches. I obviously have my local branches – now how the hell do I get them to be remote?

Then I found this overall guide Git Guide – SourceMage Wiki

on my laptop (with the branches – and after I had committed master)

git checkout rails-2.0.x
git push origin rails-2.0.x:refs/heads/rails-2.0.x

on my workstation (update/get list/create local branch)
git pull
git branch -r
git branch --track rails-2.0.x origin/rails-2.0.x

(Apparently if I ever need to delete the remote branch I just do this – though it turns out this only removes my local remote-tracking ref; deleting the branch on the server itself takes git push origin :rails-2.0.x)

git branch -r -d origin/rails-2.0.x
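Strung together end to end (again with a local bare repo standing in for the server), including the difference between dropping the tracking ref and deleting the branch on the server:

```shell
set -e
work=$(mktemp -d) && cd "$work"
git init --bare server.git

# "laptop": publish master plus the feature branch
git clone server.git laptop && cd laptop
git -c user.name=demo -c user.email=demo@example.com commit --allow-empty -m "initial"
git push origin HEAD:refs/heads/master
git checkout -b rails-2.0.x
git push origin rails-2.0.x:refs/heads/rails-2.0.x

# "workstation": fetch, list remote branches, create a local tracking branch
cd .. && git clone server.git workstation && cd workstation
git fetch origin
git branch -r
git branch --track rails-2.0.x origin/rails-2.0.x

# -r -d only removes the local remote-tracking ref...
git branch -r -d origin/rails-2.0.x
# ...deleting it on the server takes a push with an empty source
git push origin :rails-2.0.x
```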

This is all a little bit confusing – but I'm sure once I get the hang of it I'll end up adding some global aliases to make life easier. All this power and no ability to replicate the functionality of svn:externals – the mind boggles.


Doomed To Look Like A Fanboy…

I saw the announcement this morning that Rails is moving their development to Git from Subversion. I've been toying with Git for about 6 months but haven't made the switch because there isn't a tool that supports externals the way I use them at work.

In the last few days, I’ve been poking François Beausoleil, the author of Piston, to get more of my external goodness sorted out. This would allow me to spend my days working in Git while the rest of the team I work with stays in SVN – which is just about the perfect solution.

So far it still has its bumps – but I’m optimistic.

That being said I wasn’t switching to Git because DHH said to. I was more moved by the video of Linus at Google.

The funny thing about that is that a long, long time ago I used BitKeeper at work. Linus used BitKeeper to maintain the kernel. I hated BitKeeper! At the time the argument was constantly "it must be good – they use it to maintain the kernel" – which was a crap argument considering there were only six of us and we all sat in the same room. We weren't a distributed team of thousands. I eventually got us to switch to Subversion (which had the nice benefit of saving my dept $1500/seat a year for SCM and got us shiny new laptops) and things were good.

When I started seeing grumblings about Git a year ago – I was prepared to ignore it. It seemed like it was going to be bitkeeper all over again – meaning a great solution for Linus – and a bad one for me. I’ve been watching for signs that I could keep my externals workflow. François even posted on my blog to give me a heads up about the new – not quite released version of his tool.

Last night I got one of my projects migrated so I can use either SVN or Git on it. I figure after a few weeks of working I'll know if Git is all it's cracked up to be – and if it isn't, I can go back to the land of SVN. I remember what it was like back in the day when I went from CVS to SVN. I loved the improvements in SVN and hated the fact that there was zero tool support. That eventually settled down – here's hoping the path is much shorter for Git.


JavaScript – So Close And Yet….

I’m back in the JavaScript trenches. I continue to be amazed at how flexible the language is – once you really dig into it. That’s the good news – but then comes the bad news.

I’ve grown very used to a number of the great features of Ruby. To be completely honest, in a number of cases I didn’t really even think about them being there – because Rails uses them for incredibly great programmer outcomes. Now that I’m in JavaScript and they are missing, the sadness ensues.

An easy example of what I’m talking about:

person.name = "Bob Roberts"
person.name # => "Bob Roberts"

On the Ruby side that can actually be implemented as a set of methods on the object.

class Person
  def name
    @name
  end

  def name=(name)
    @name = name
  end
end

Ruby hides the method part from you – which is nice. If you wanted to get a little crazy you could do it with a method_missing call (this is not the way to do it, but I’m making a point here – stick with me).

class Person
  def method_missing(m, *args)
    if m == :name=        # setter calls arrive as the symbol :name=
      @name = args[0]
    elsif m == :name
      @name
    else
      super               # anything else still raises NoMethodError
    end
  end
end

Not pretty – but very, very powerful. I’ve run into some really cool articles on metaprogramming in JavaScript – so I was hoping there was a way to emulate this kind of functionality in JavaScript. Turns out – there isn’t. Then I ran into a presentation from John Resig (author of jQuery and JavaScript badass). If you check out his recent presentation on ECMAScript 4 (you can skip ahead to slides #13 & #14) you’ll see that they are specifically adding the features I’m talking about to the new ECMAScript 4 (AKA JavaScript 2.0). There are some other neat things in there (there are also some other crazy things that make it feel much like the time I watched PHP 4 convert to PHP 5, when it felt like they were adding things based purely on emulating other languages).

That’s the good news – the bad news… Firefox 3 (which is currently in beta) will only support JavaScript 1.8. The only reference to a date in the presentation is 2008-2009. I found this blog post that says they are hoping to firm it all up by October of 2008.

Guess that means I’m not going to be waiting for it – since I actually have stuff to get done right now (almost literally – I have a code demo due on Monday for a library I’m working on). At least I know that the renaissance I’ve seen in the JavaScript world is really just getting started.

*Side Note* – To give you an idea of how far from a baked cake all of this thinking is, here is a PDF that shows some concerns over some of the features that might be included.

