Jeff Atwood recently wrote a blog entry saying the best (and maybe only) way to REALLY learn how to write software is to actually DO IT. Although I'm a great believer in learning by doing, Jeff's post did get me thinking about how I would design a university course to help teach the things you learn “on the battlefield”.
Start - Students are divided into groups of 3-4, given a loose specification, and told to implement a solution in technology X by date Y (12 weeks away). Students are given a weekly "budget" of time to spend on the project (say 4-6 hours per person), and their involvement in the project should stay capped at that budget. Students would be required to keep a short diary of what they spend their project time on.
1 week in – no changes. Any requirements clarification requests by students are answered vaguely.
2 weeks in – students are told they need to have a working prototype ready in 2 weeks.
3 weeks in – students are told they must use a supplied component Z to perform a certain part of the system. One major aspect of the system is changed (for example "it has to work offline" if it is a web app, or some crazy security model that has to be applied).
4 weeks in – prototypes are evaluated. Students are told to scrap the prototype as a second big functional change is introduced ("it has to be integrated into Outlook", or "it cannot use any client-side script"). Teams are re-organized.
5 weeks in – a breaking change is made to the API of component Z
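A breaking change like this hurts far less if a team has wrapped component Z behind an interface of its own. As an illustrative sketch (component Z and its methods are hypothetical, since nothing here specifies them), suppose the week-5 change renames a method and alters its return shape; an adapter confines the damage to one class:

```python
# Hypothetical component Z, before and after the week-5 breaking change.
class ComponentZv1:
    def lookup(self, key):
        # Original API: returns a dict.
        return {"id": key, "value": key * 2}

class ComponentZv2:
    def fetch_record(self, record_id):
        # Breaking change: new method name, returns a tuple instead of a dict.
        return (record_id, record_id * 2)

class RecordSource:
    """The team's own stable interface; the rest of the code depends only on this."""
    def get_value(self, key):
        raise NotImplementedError

class V1Adapter(RecordSource):
    def __init__(self, z):
        self.z = z
    def get_value(self, key):
        return self.z.lookup(key)["value"]

class V2Adapter(RecordSource):
    def __init__(self, z):
        self.z = z
    def get_value(self, key):
        _, value = self.z.fetch_record(key)
        return value

# Application code never changes across the upgrade:
def double_of(source, key):
    return source.get_value(key)
```

When the breaking change lands, the team writes `V2Adapter` and swaps it in at one place, rather than hunting down every call site. Whether students discover this on their own is, of course, part of the lesson.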
6 weeks in – students are told the system must use data from a legacy system, which will be made available to them in the coming weeks. A "representative sample" (laughs sadistically) will be provided to them (tell them in week 8). The representative sample is small, and structured in a very different way from how the problem domain is currently understood.
7 weeks in – some requirements changes
8 weeks in – merge teams and tell them to "consolidate" code. More requirements changes. Another breaking change to component Z.
9 weeks in – more requirements changes. One team member is randomly chosen to be the "team representative" and is tasked with writing performance appraisals for the other team members. Team members are asked to rate each other. The highest- and lowest-ranking member from each team (based on the peer rankings) is swapped with another team.
10 weeks in – legacy data is delivered. Some fundamental information is missing, revealing a major discrepancy between the way the requirements are defined and the data available to support them.
11 weeks in – minor but annoying change to the environment the software has to run on. Possibly some left-field performance requirements thrown in.
12 weeks in – project completion.
13 weeks in – students are asked to write up the challenges they faced on the project, the risks they encountered, and the lessons they learned. Students could also optionally deliver the solution here if they have negotiated to do so.
At any time students could negotiate to deliver less, deliver later, or "spend" more resources (although they would not be explicitly told they could do so). Pushing back on requirements changes and re-negotiating the schedule (based on those changes) would also be allowed, but again not explicitly stated. Although this sounds somewhat sadistic, I think it would go some way towards preparing students for "real-world" development. Each student's final mark would be based on their team's degree of success in delivering the negotiated features on time and on budget, and on their write-up of the lessons learned.