Save the date! On Thursday, November 25, 2010, the Open Space CloudCamp un-conference takes place in Düsseldorf. The main theme of the event is Cloud Computing, as you may guess, and the Open Space format guarantees a lot of fun! The camp starts at 13:00 and will be kindly hosted by my employer, MT AG, in Ratingen (map). Thanks to MT AG the event is completely free for attendees.
The agenda is not carved in stone yet, but will be something like this:
13:00 Registration, networking
13:30-18:00 TBD (Keynote, lightning talks, open space sessions)
So, if you are near Düsseldorf on November 25 and are interested in Cloud Computing (no matter which vendor or technology you prefer), why aren't you on the attendee list yet?! An Open Space event is only successful if enough passionate people attend it, so please help us make CloudCamp a success. Register at http://cloudcamp.org/dusseldorf.
And we are waiting for your lightning talk proposals. Mail me: sergey.shishkin [ at ] mt-ag.com.
Update [22.11.2010]: CloudCamp Düsseldorf is postponed indefinitely due to a lack of registrations. Sorry.
Envy me: I spent last weekend in Karlsruhe with some of Germany’s smartest and most passionate .NET developers. Big thanks for that to the organizers of the .NET Open Space Süd and to all the participants. I feel inspired and am full of ideas again.
The un-conference started for me with a great functional programming session in which @sforkmann introduced monads. The topic was so great that immediately after the session many attendees started implementing the Maybe monad in C#. And monads accompanied us all weekend in all sorts of jokes.
Two sessions were focused on software specifications, BDD, and ATDD. I represented the ATDD and FitNesse camp; however, to be more effective I should bring more hands-on examples to break through code-centric developer heads. Anyway, the “BDD Shootout” session was really inspiring and code-intense. Not a big surprise with people like @agross (MSpec), @sforkmann (NaturalSpec), and @BjoernRochel (XUnit.BDDExtensions). Although the guys were skeptical about a comparison session at first, it was a real success: win-win-win for all three frameworks.
NOS Süd finished with a Coding Dojo facilitated by @ilkerde. After that dojo (only my 3rd) I still have some mixed feelings. I was hoping to learn some new design or coding practices, but the major thing I learned was the importance and complexity of selecting members for a development team. After a while it’s not fun anymore to argue about whether we do TDD or BDUF… Still, it was a very good experience for me, and I think for the others who took part. Thank you, Ilker!
If Open Space is so great because of its “coffee break”-like experience, imagine the coffee breaks at an Open Space! Of course the biggest part of NOS Süd (at least for me) were its parties and breaks, when I chatted with many bright and passionate folks. That was the most inspiring and enjoyable experience of the weekend!
Another great .NET community event took place last Friday: DotNet Cologne 2010. Big thanks to all the attendees, speakers, sponsors and organizers for making it happen. The event was a huge success. And I even got a chance to speak there about the improvements and novelties of Windows Communication Foundation 4.0.
To make it more fun and educational at the same time, I decided to do an experiment: use Git, a distributed version control system, in combination with live coding. I kept quite a fast coding pace aimed at experienced WCF developers, showing them what’s really new in WCF4. But to make the code samples more accessible for beginners and to make my coding “traceable”, I committed each exercise to a local Git repository right during my presentation and then pushed them all to GitHub in the cloud.
A local Git repository makes it really easy to save your coding progress, while a cloud code-hosting platform like GitHub allows you to share your code and collaborate on it with others. So now anybody can review the commit history of my live WCF4 demo and easily grasp, for example, what it takes to call a dynamically discovered service via a generic channel factory.
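For readers who weren’t there, a minimal sketch of that discovery scenario could look like this. This is my illustration, not the actual demo code: the IEcho contract and the binding are assumptions, while DiscoveryClient, UdpDiscoveryEndpoint and FindCriteria are the real WCF4 ad-hoc discovery types.

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Discovery;

[ServiceContract]
public interface IEcho
{
    [OperationContract]
    string Echo(string message);
}

class Program
{
    static void Main()
    {
        // Probe the local network for services implementing IEcho (WCF4 ad-hoc discovery).
        var discovery = new DiscoveryClient(new UdpDiscoveryEndpoint());
        FindResponse found = discovery.Find(new FindCriteria(typeof(IEcho)));
        discovery.Close();

        // Call the first discovered endpoint via a generic channel factory.
        EndpointAddress address = found.Endpoints[0].Address;
        var factory = new ChannelFactory<IEcho>(new BasicHttpBinding(), address);
        IEcho channel = factory.CreateChannel();
        Console.WriteLine(channel.Echo("hello"));
        factory.Close();
    }
}
```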
I liked the experiment myself and also got some positive feedback from the audience regarding the use of Git. On the downside, I ran out of time and had to leave a couple of interesting demos aside, though not because of Git but because of poor time planning. Lessons learned; I promise to improve next time. Join me on GitHub. Your feedback is always welcome!
Until recently, when I wrote GUI code with time dependencies (like refreshing data every 10 seconds), I would extract the timer into an interface with an event and inject that dependency into the presenter/controller/view model. That way the UI logic stays testable, because I can swap the dependency with a test double that triggers the event manually.
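A minimal sketch of that interface-based approach (all names here are illustrative, not from a real project):

```csharp
using System;

// Hypothetical timer abstraction injected into the presentation layer.
public interface ITimer
{
    event EventHandler Tick;
}

public class Presenter
{
    public int Refreshes { get; private set; }

    public Presenter(ITimer timer)
    {
        // Refresh the UI on every tick.
        timer.Tick += (sender, args) => Refreshes++;
    }
}

// A manual test double that fires the event on demand.
public class FakeTimer : ITimer
{
    public event EventHandler Tick;

    public void Fire()
    {
        var handler = Tick;
        if (handler != null) handler(this, EventArgs.Empty);
    }
}
```

A test constructs the presenter with a FakeTimer and calls Fire() to simulate a tick. The downside: one more interface and one more fake class to maintain for every such time dependency.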
Now I think I have a more elegant solution – an Observable from the Reactive Extensions Framework. Take a look at the ViewModel:
```csharp
public class SomeViewModel
{
    public SomeViewModel(IObservable<Unit> timer)
    {
        timer.Subscribe(x => UpdateUI());
    }

    private void UpdateUI()
    {
        // refresh the data bound to the view
    }
}
```
Thanks to the generic IObservable<>, I can spare a custom interface. And this is the dependency injection code (no DI container in use, for simplicity):
```csharp
var timer = Observable
    .Interval(TimeSpan.FromSeconds(10))
    .Select(x => new Unit())
    .ObserveOnDispatcher();

var model = new SomeViewModel(timer);
```
The Interval method creates an infinite sequence of events raised every 10 seconds. The generated sequence is typed as IObservable<long>, so the Select call converts it to a “typeless” IObservable<Unit>. The Unit type comes from functional languages and serves as a typed void. The last thing to do is to tell the Rx Framework to raise events on the GUI thread with ObserveOnDispatcher (there is an overloaded ObserveOn method as well); by default, events are raised on a thread-pool thread.
And here is the code to put in a test:
```csharp
var fake = new Subject<Unit>();
var model = new SomeViewModel(fake);
fake.Publish(new Unit());
```
Very simple as well. Subject is a very useful generic class that implements both IObservable<> and IObserver<>, so you can pass an instance of Subject to a method expecting an IObservable<> and then Publish into that sequence thanks to the IObserver<> implementation.
So, if you’re about to write an interface with an event that you want to pass somewhere as a dependency, think again. Maybe the generic IObservable<> interface will do the trick much more simply. Moreover, client code can leverage the full Rx API provided through extension methods on IObservable<>, so the client can, for example, throttle events or combine them in many different ways. Rx is very powerful.
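For instance, a client could compose the timer with a second event source without touching the view model at all. A sketch, assuming manualClicks is a hypothetical IObservable<Unit> of refresh-button clicks; Merge and Throttle are standard Rx operators:

```csharp
// Assumes: timer is the IObservable<Unit> built above,
// manualClicks is an assumed IObservable<Unit> of user-initiated refreshes.
var refresh = timer
    .Merge(manualClicks)                 // timer ticks plus manual refreshes, one sequence
    .Throttle(TimeSpan.FromSeconds(1));  // collapse bursts: emit only after 1 second of silence

var model = new SomeViewModel(refresh);
```

The view model still only sees an IObservable<Unit>; all the composition policy lives on the client side.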
As the next step I’m thinking of abstracting a clock as IEnumerable<DateTime> instead of writing a custom IClock interface or using a global variable like MyTime.Now or, much worse, DateTime.Now.
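A rough sketch of how that clock idea could look (my speculation, not settled code):

```csharp
using System;
using System.Collections.Generic;

public static class Clock
{
    // Production code enumerates the real clock lazily, one reading per MoveNext...
    public static IEnumerable<DateTime> System()
    {
        while (true)
            yield return DateTime.Now;
    }
}

// ...while a test passes a plain array wherever IEnumerable<DateTime> is expected:
// var fakeClock = new[] { new DateTime(2010, 1, 1), new DateTime(2010, 1, 2) };
```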
What do you think about time and timer dependencies?
I have already written about a change I was part of last year. Being a coach after two years of product development made me realize what my true passion in software development is: continuous improvement, or Kaizen. While on a product development team you have to match your pace to the team’s and not run too far ahead for too long; the team should be able to keep up. And you are bound to the technologies of a particular application type (no WPF for a web-service app, etc.).
After long deliberation I finally decided to go into consulting. WPF wasn’t the main reason, of course; I just plan to encounter different projects and different teams. This is what should bring me my Kaizen pace, or at least that’s my belief. So, since April I have been part of the Managing Technology team, looking forward to new projects and new experiences.
And I’m on Twitter now.
How big is the .NET 3.5 runtime? The full redistributable package for the x86, x64 and ia64 platforms, including 2.0 and 3.0, is about 200 MB. No single user needs to download and install it all: somebody already has 2.0 or maybe even 3.0, and nobody is going to install all the supported architectures (x86 etc.) on a single machine. So this huge download is only intended to be distributed on a CD or DVD along with applications requiring .NET, when you don’t know in advance which platform you are targeting. If you do know the user’s platform in advance, you can strip the package down to some 20–60 MB, which is pretty good.
Anyway, the .NET Framework has to be installed using Windows Installer, and that’s quite an invasive way of deploying an application. Can somebody XCOPY-install the .NET Framework? I suppose it’s very tricky, if possible at all. And then there are those copyrights and license agreements saying that Windows Installer is the only legal way to deploy the .NET runtime to users.
All these problems might seem illusory, since Microsoft is pushing the .NET Framework through Windows Update, and starting from Windows XP SP3 and Vista RTM users get at least .NET 2.0 installed. But the e-health market my company works in is very conservative (at least in Germany), and users are still running Windows 2000 with Windows apps emulating a DOS-like GUI. It is horrible! We cannot rely on any Windows Updates or even on a live and fast internet connection to run the .NET Framework setup bootstrapper. Moreover, our software, a web service, should be deployed as an integrated component of some third-party application (almost certainly not a .NET one). And the more complex our deployment story is, the fewer partners will want to embed our software into their products.
Are there any alternatives out there?
One can turn to virtualization solutions. But instead of hardware virtualization, one can virtualize the .NET runtime. Using Xenocode Postbuild it is possible to compile a managed application into an unmanaged one with a fully functioning embedded .NET runtime. The size of an app starts at 11 MB for a simple Hello World app and at about 40 MB in a real-world scenario (without deleting unused .NET code, so that reflection can work properly). That is pretty good, although not cheap.
Here comes Mono
There is also the Mono project, which contains an open source runtime compatible with the .NET CLR at the binary level. That means you can run apps compiled for the .NET CLR on the Mono CLR without recompilation. Mono’s cross-platform nature makes deploying the runtime much simpler: you just have to fulfill the requirements of the LGPL or obtain a commercial license from Novell.
How does it work?
So I downloaded and installed the full Mono for Windows package and started to play with it. My aim was to extract a minimal subset of Mono able to run Mono’s XSP development web server and its test web site (\lib\xsp\test\).
The result is a structure of only 45 files, 21 MB in total.
It is still far from perfect (why do ASP.NET applications need System.Windows.Forms, for example?), but it is a good proof of concept. I’m only a newbie in the whole world of Mono. But now I can start a web site from a website folder on a Windows machine without any .NET Framework installed, just like this:
..\mono-mini-2.6.1\bin\run.cmd ..\mono-mini-2.6.1\lib\mono\2.0\xsp2.exe --root . --port 8080 --applications /:.
In order to make it repeatable, I made a batch script that creates this Mono-mini package out of a real Mono installation. It can be used like this:
mono-mini.cmd c:\work\ExternalBin\Mono-2.6.1 c:\temp\mono-mini-2.6.1
I’ve got an XCOPY-deployable .NET runtime for my ASP.NET web services with 3.5 support in under 21 MB (7.5 MB zipped), which works on my machine. What now?
- I need to test it thoroughly with the apps I will run on it. It is still possible that some components are missing.
- I might also remove the Windows Forms dependency, though that would require me to patch machine.config and the global web.config.
- All these bin\, lib\ and etc\ directories coming from Mono’s Linux background could easily be simplified for the Mono-mini package.
- Some licensing questions are yet to be clarified. Under the LGPL, Mono is not allowed to be embedded into a non-LGPL executable, AFAIK. So making deployment even simpler with Xenocode (only as an assembly repackaging solution) or .NETZ (for the same purpose) is not possible without purchasing the commercial license from Novell.
Friday was a huge day for me! My team officially announced that we are going to implement Scrum. This was by no means an easy change. It took us nine months, and I just want to save this story in my blog for the record.
In April 2009 I was an architect in a company using a “waterfall” process with 3-month release cycles, in a team of 40 people separated into three full-blown departments (“product management”, “development” and “quality assurance”), each with its own organizational structure and the typical “us against them” mindset. On top of that there were also technical writers, who had to write all the user manuals on weekends after the code was implemented, and support engineers, who kept distracting developers with all those “urgent customer issues”.
PM aimed to complete the software specifications before the “spec freeze” milestone and throw them over the fence at DEV. Specs were long, boring documents with faked GUI screenshots and lots of ambiguity. While PMs were writing specs, DEV did housekeeping, fixing bugs or writing code that DEV thought would be useful in the future. With the spec in hand, DEV coded like crazy to hit the deadline. Then QA started thoroughly comparing the spec with the software and filing the bugs it found, or what QA thought were bugs. Nobody had time to do their job right.
After each release we heard the same annoying phrase from our management: “It was the toughest release ever, but we did it!” It was supposed to be motivating.
After an unfortunate try to present Scrum with all its “pigs” and “chickens” to QA and PM, the word “Scrum” by itself became a taboo…
At that point, in spring 2009, I was about to leave, especially seeing some of my colleagues leave too. But after some management rearrangements in the DEV department, I decided to try to facilitate the change to Agile before leaving.
Today we have:
- Sprints, which are 2 weeks long and relatively consistent in terms of work committed and work delivered.
- Teams consisting of DEV and QA sitting together in one room (per team), communicating with each other and helping each other.
- Teams estimate requirements and pull them from the Product Backlog into their Sprint Backlogs during Sprint Planning meetings.
- Teams decide how requirements are going to be implemented.
- Specifications emerging during the Sprint as a joint effort of PM, DEV and QA, in the form of FitNesse tests.
- Technical writers use Sprint deliverables immediately and have an opportunity to schedule their work accordingly.
- All “urgent customer issues” going first through the Product Owner, who is responsible for prioritizing them and putting them into the Product Backlog. If an issue is really urgent, it goes directly to the Fast Lane on the Task Board to be pulled by a team member.
- A few Certified ScrumMasters and Product Owners.
These are all things we have been practicing for some time already, although not everything on the list works frictionlessly. Nonetheless, the “change” on Friday was more like a Scrum training for all teams plus an announcement: “Oh, by the way, we are actually doing Scrum already.”
It was an incredibly hard change, I must say. And it did not happen by my will alone; it could not have happened without support from several people in our company who were willing to listen. Together we did it. But there is still a lot to do on the way to Agile. The only thing I know for sure is that there is no way back to the stone age of the waterfall process anymore.
I write all this because Agile has since become my topic of interest, and I hope to write more about it in the future. I stumbled upon many obstacles while bringing agile ideas to different people, and it was always a fascinating and invaluable experience. Stay tuned!