Software Engineership - How To Think About Software
This interactive session will talk about the past and future of crafting software and how it is changing the way developers and companies think about software development.
Moderated by Nix Crabtree, join the conversation, share your insights and probe the speakers on the elements of their talks that left you wanting more.
Nix Crabtree: Welcome to this panel session on Software Engineership. As Frank said, I'm Nix Crabtree, the Lead Principal Software Engineer at ASOS. Part of my role there is to constantly make engineering at ASOS, and our engineers, better, and a key part of that has been, and still is, software crafting. So it gives me particular pleasure to host this discussion, especially with the four fantastic panelists joining me today. Let me introduce them to you. We have Dave Farley, who is the co-author of, as you've all heard, Continuous Delivery with Jez Humble, and the founder and managing director of Continuous Delivery Ltd, where he works as an independent software consultant, advising companies around the world on the topics of Continuous Delivery, team organization, software development process, automated testing, software design for high performance, and software design in general. We have Savvas Kleanthous, Head of Engineering at ParcelVision, with expertise in domain-driven design, event sourcing, CQRS, high-throughput low-latency systems, scalability, software analysis, design, and architecture. Lisa Crispin is a Quality Coach at OutSystems, helping to build an observability practice. She's been a hands-on tester on Agile teams since 2000, and is the co-author, with Janet Gregory, of three books: Agile Testing, More Agile Testing and Agile Testing Condensed. She and Janet are co-founders of the Agile Testing Fellowship, offering the "Agile testing for the whole team" training course in person and virtually through training providers around the world. And finally, we have Feross Aboukhadijeh, an entrepreneur, programmer, open source author and mad scientist. Feross has built a number of innovative projects, the latest of which is Speak Easy, kind of like virtual speed dating meets professional networking. I hope people have seen that and will try it out during UXDX this week. If you haven't, please do check it out. Thank you all for joining me today. I'm going to start with Dave.
So, we hear the term software crafting a lot, and that has the connotation of artisanship: mastery honed through particular skills or years of experience. And for me, it's a great way of reframing how we approach writing software. But does that term paint the whole picture for you?
Dave Farley: I don't think it does. I don't think it quite goes far enough. The first thing that I should say is that I think software craft was a great idea, an important idea that moved us forward from where we were. We suffered for several decades under the heavy yoke of trying to apply production-line thinking to software development, and it's not that kind of problem. Craft is a much better fit for the nature of our problem, which is always a problem of discovery and learning and so on. But as you said when you were describing craft, craft is kind of, by definition, limited to human capability, and that's fantastic. But what engineering does in general, in other disciplines, is take craft and amplify it so that we can go beyond human limitations. I think that we need to retake the term software engineering. I think we need to reframe it so that it really is an engineering discipline for software development. In almost every other field that I can think of, engineering essentially means the stuff that really works. In software development, we've come to assume that software engineering means that overly academic, overly bureaucratic stuff that other people do, and I don't think that's correct. I think that we can amplify all of the good things about craft by applying a little bit of engineering discipline on top of the craft. And that's the stuff that really interests me.
Nix Crabtree: Yeah. And in fact, it was Margaret Hamilton who, working on the Apollo program, coined the term software engineering.
Dave Farley: Indeed. I'm currently in the middle of writing a book which is really about this topic, and I did a little bit of research into the history of software engineering. It's fascinating, because there was a movement back in the late 60s that we would think of as kind of advanced Agile thinking. There was test-driven development described by Alan Perlis in 1968, and there were people doing this kind of really discrete, iterative, feedback-led experimentation to build software systems. And that's some of the stuff that Margaret was doing to build the flight control systems for the Apollo program when she coined the term. So, I think that engineering has got a bad rep in our sphere, and I think that's a mistake. What we tend to think of when we talk about engineering is production engineering, which is a different kind of thing. We are design engineers, and design engineering is about learning, experimenting, trying stuff out, breaking things, finding what works and what doesn't. I am currently completely obsessed with watching Elon Musk blow up spaceships as he tries to evolve towards a rocket that can go to Mars. He's destructively trying out his ideas, and that resonates with me. That seems to be what real engineering at the edges of knowledge is like, and that's what I think we should be doing.
Nix Crabtree: Thank you. Savvas, we know that combining our crafting and our engineering skills is essential for writing high-quality software, as Dave just outlined, but does it stop there, or is it also important for us to build a shared understanding of the domain we're working in as well?
Savvas Kleanthous: Thank you, Nix. It's a really good question. In my career, I started as a software engineer, I moved to more senior positions like team lead and tech lead, and lately I've been working as an architect and the head of engineering for ParcelVision. Through that time, I got a chance to experience product building from different perspectives, and at the same time I got to work closely on products with people from varied backgrounds within the same company. And I think the turning point in my career was when I actually internalized that products are never, well, very rarely, built by a single person. In the vast majority of cases, a product is built by a team, and while it's important for everyone who is participating in building it, be it the software engineer, the subject matter expert, or anyone really, to be the best that they can be, and software engineers need to perfect their craft, fundamentally it is the team that is building the product, and it is the team that needs to perform well in order to build a successful product. Exactly because of that, in order to be successful in building the product, we need to build a shared understanding of the behavior that we want to deliver to production. We need to have a shared understanding of the goals that we have as a team. We need to have a shared understanding of the problems and the opportunities that we have in that product as well. Moreover, as I mentioned, it's extremely rare that the development team is actually the subject matter expert on a particular product. So, what happens mostly is that the development team works with other people to understand how the product has to behave. A favorite quote of mine is one from Alberto Brandolini, the creator of Event Storming: it is not the domain experts' knowledge that goes into production; it is the developers' understanding and interpretation of that knowledge that goes into production.
And I think that succinctly describes exactly why a shared understanding is so important.
Nix Crabtree: Yeah. We use Event Storming at ASOS, and we've used it to understand some particularly complex domains and workflows, and in each case it worked really effectively. It is an investment in time and effort, and you need a team that is happy to get in a room and spend potentially hours putting up innumerable post-it notes of different colors, but the result is quite astounding. One of the exercises we used to train people in how to start Event Storming is to event storm Pac-Man, and of course you get a room full of developers who all say, "Easy," and then an hour later they're scratching their heads going, "Wait, but how did the ghosts catch Pac-Man?" And then you start to work out that every time Pac-Man eats a dot, he slows down by one frame, which is enough for the ghosts to gain on him. And then, where does he turn at the end? It actually becomes quite pleasurable, as a software crafter, a software engineer, a software developer, to challenge yourself to think through those details. But that's right at the implementation level. Are there different levels of Event Storming that we can use?
Savvas Kleanthous: Yes, of course. Perhaps the most important and the most useful version of Event Storming is big picture Event Storming. As you described, you find those little surprises when you move to, or at least close to, the implementation level. But from my experience, in most of the companies that I have worked with, especially when they become a little bit large, there is very little visibility into how things are done and how different teams work. And that is something that I think was lost in the transition from the very old waterfall processes, which were definitely something we wanted to move away from; during that migration from waterfall to Agile, it seems that a lot of companies lost their way and went a little bit too far, I guess: no design up front, and no visibility into what goes on. That's the kind of problem we're seeing: a lot of people not knowing how other things work, work being repeated across teams, problems being solved on one side of the fence but not on the other, lessons not getting transferred across. So, I think that big picture Event Storming is actually quite useful because it allows everybody to understand how the domain works. Not so much the software itself, but how the product works as a whole: what kinds of problems do we have? What kinds of end-to-end workflows exist? So, not close to the software, but fundamentally what the users are doing and how things work internally to fulfill a need.
Dave Farley: Savvas, that was a really good description. I was recently doing some work to try and synthesize some recommendations for junior software developers, and I came up with a bit of a meme that I liked, which I think kind of captures what you were just describing: fundamentally, our job is not to write code; our job is to solve problems. Problem solving is more important than design; ultimately, people care less about the design and more about whether the problem is solved or not. Design is more important than coding, and coding is more important than languages and tools. And that's kind of the order in which I would rank our skills as practitioners. At the top, we need to be problem solvers. You talked about investment earlier on, the time that goes into Event Storming. That's an investment in understanding so that you can better solve problems, which is never wasted, or usually not wasted, unless you're getting into analysis paralysis.
Nix Crabtree: I mean, if we're not solving problems when we write code, then effectively we're just glorified data entry clerks. Great. Thank you for that. Lisa, a crafter working with expensive materials, or an engineer working with complex mechanisms or structures, will probably start by experimenting with mock-ups before they commit the time and materials to building a final product. Is experimentation something that could also bring value to the way we write software?
Lisa Crispin: Yeah, I'm a huge fan of what Linda Rising calls small frugal experiments. I think it's great to start with low-fidelity paper prototypes or virtual whiteboard prototypes, and we can test those. And of course, coming from the testing perspective, we can test our feature ideas, and then we can use techniques like Event Storming to identify a thin slice, an MVP if people want to call it that, that we can use as a learning release to test a hypothesis. We're solving problems, as Dave says. So, what problem are we trying to remove for the customer, as opposed to what feature are we adding, and how will we measure that? And then again, as Savvas was saying, that's where the domain knowledge comes in. We have to understand it well enough to know: what do we want to learn? How do we measure that? How do we get that data when we put it in front of some customers to get feedback? And that's where we need to be smart about how we instrument our code to capture that data, and make our code testable, but also operable, so that we can set up ways to look at the data from production. I'm a big fan of observability and trying to learn about that, because we have to understand exactly what our customers are doing. How are they using these features? Is it solving their problems? We take these really short, small increments of learning and slowly build it up. And if we learn that something wasn't of any value to anybody, let's try something else. So, those small experiments where you have a hypothesis and you have a way to measure it. I think that's where teams really struggle: how do we measure it? How do we know when we're successful? But I think that iterative processes are really the way we build good things for our customers.
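To make the idea concrete, here is a minimal sketch of a "small frugal experiment" in code: a thin slice rolled out to a small percentage of users, with a measurable hypothesis attached. All names (the one-click checkout feature, the metric names, the 70% baseline completion rate) are invented for illustration; a real team would use their own feature-flag and analytics tooling rather than this in-memory stand-in.

```python
import random

# Hypothetical in-memory metrics store; a real team would send these
# counters to their analytics or observability platform instead.
metrics = {"checkout_started": 0, "checkout_completed": 0}

def in_experiment(user_id: str, rollout_percent: int = 5) -> bool:
    """Deterministically place a small slice of users in the experiment,
    so a learning release only reaches a few customers."""
    return hash(user_id) % 100 < rollout_percent

def record(event: str) -> None:
    metrics[event] = metrics.get(event, 0) + 1

def checkout(user_id: str, one_click_enabled: bool) -> None:
    record("checkout_started")
    if one_click_enabled:
        # Thin slice under test. Hypothesis: one-click checkout raises
        # the completion rate (modelled here as always completing).
        record("checkout_completed")
    elif random.random() < 0.7:  # simulated baseline completion rate
        record("checkout_completed")

# Run the experiment over simulated traffic, then measure the hypothesis:
# did the completion rate move?
for i in range(1000):
    user = f"user-{i}"
    checkout(user, one_click_enabled=in_experiment(user))

print(metrics["checkout_completed"] / metrics["checkout_started"])
```

The point is not the mechanics but the shape: a hypothesis, a small audience, and a metric agreed on before release, so the team knows what "success" means when the data comes back.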
Nix Crabtree: And is that something that exists just within the team, or does it cross boundaries into UX or architecture? Are there experiments that we can do hand in hand with other areas?
Lisa Crispin: Well, I'm a fan of a whole-team approach, so I would hope all those people are on the team, or at least collaborating really closely. If we're a software delivery team and we don't have the designers on the team, or we don't have the architects on the team, we need to build bridges to those people, establish those relationships, get their help, and really work closely with them, because I don't think anybody can do this in isolation. I think we have to all be working together on it. And I know in a big organization that gets more challenging; I'm in a big organization for the first time, but you can still say, "Hey architect, we need to have a conversation. Please come help us think about our design, or about how we want to implement this thing we think will solve a problem for our customer."
Nix Crabtree: And I mean, we talked about observability in your intro. Is that something that can be part of experimentation, or is that something that comes later?
Lisa Crispin: I think you need it for the experimentation. You need to be able to get the data. You've got to capture the events that happen as people use the system, or as other systems use it, whoever the user is, and be able to understand that data and drill down. It's like, "Oh, it looks like there was a problem here, or the performance was really strange here," and be able to dig into it quickly and respond to it quickly. Hopefully, if you're doing a small experiment, you're just giving learning releases to a small number of customers, so you're not causing very many people pain if things go wrong, but being able to respond quickly, roll things back, or fix things quickly is really important, and that's where the observability comes in. We can't anticipate everything users will do. We can't replicate our production environments in our test environments; there's just no way to do that. So, we need to be able to get the data from production, to be able to ask our production system the questions we didn't know in advance we'd have to ask, because we didn't think of them in advance. We can't know everything in advance. I see this as part of testing: we can test all we want before we release something, but we need to be able to still learn from production and respond quickly there, and be able to test safely in production. We can do that today with all our great technology. We have so much technology now that supports our ability to do these kinds of experiments.
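One common way to support "questions we didn't know in advance" is to emit structured, wide events rather than free-text log lines, so new questions can be asked of old data. The sketch below is generic, not tied to any particular observability product; the event fields and the "which checkouts were slow?" query are invented for illustration.

```python
import json
import time

events = []  # stand-in for a real event pipeline or observability backend

def emit(event_type: str, **fields) -> None:
    """Record one structured, wide event instead of an opaque log string."""
    events.append({"type": event_type, "ts": time.time(), **fields})

# Instrumented application code: every request emits one rich event.
def handle_checkout(user_id: str, duration_ms: float, status: int) -> None:
    emit("checkout", user=user_id, duration_ms=duration_ms, status=status)

handle_checkout("u1", duration_ms=120, status=200)
handle_checkout("u2", duration_ms=2400, status=200)
handle_checkout("u3", duration_ms=95, status=500)

# A question nobody planned for in advance: which checkouts were slow?
# Because the events carry structure, it can be answered after the fact.
slow = [e for e in events if e["type"] == "checkout" and e["duration_ms"] > 1000]
print(json.dumps(slow, indent=2))
```

With string-only logs, that "slow checkouts" question would have required a brittle regex, or re-instrumenting and waiting for new traffic; with structured events, it is just a new query over data already captured.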
Nix Crabtree: Okay. Thank you. Feross, the story we don't usually hear about either crafters or engineers is what happens after they create their pièce de résistance. Does a civil engineer in this modern age go and check if #WorstBridgeEver is trending on social media? Is feedback the critical point at which we actually surpass these analogies, to inform and adapt what is essentially a living, evolving product?
Feross Aboukhadijeh: I think that's right. I think that the main difference between a civil engineer and a software engineer is obviously that the civil engineer doesn't really get a chance to redo or fix their mistakes in the same way that a software engineer does. There are those stories of bridge failures that led to structural problems and attempts to retrofit things after the fact, and it's never a good place to be in, but with software we don't have that same constraint. It's very easy to deploy code constantly. So, that definitely changes the way that teams should think about how they build stuff. I think planning is still super important in the engineering and software context, but I'm personally a fan of spending as little time as possible in the planning phase and just getting something out there. Of course, I'm a little biased, because I tend to build a lot of things on my own or on very, very small teams where there's not as much cost to getting stuff wrong, so that may be different in scenarios where you have larger teams coordinating with each other and where the cost of making a mistake is higher. But I think there's only so far that planning and preparation can take you, because once you encounter reality, reality is going to show you how your plan was wrong. That's not to say that it's not worth having a plan, but the plan will never survive contact with reality. So, I like to spend as little time as possible getting the idea out into the world and getting the initial implementation out there, because you can only learn so much sitting in a room pontificating about what could happen when you release this.
Certainly from a product perspective, I think it makes sense to just get it out into the hands of your users and see what happens, especially if you're in an area where the cost of building something and getting it into users' hands isn't that high, on the order of a couple of weeks or a month of building. Then it might be easier, rather than spending months and months thinking about what could happen, to just put it out there and see what people do with it. There are a lot of examples in my career where putting something out there taught me way more than I could have learned by just trying to guess what users would do. A really recent example from Speak Easy, for those of you who have had a chance to use it: it's a video calling platform where you get matched one-on-one with people you get a chance to talk to. The initial version of the site was this giant public Speak Easy where everybody would get matched with everybody, and that was kind of a wrong decision, because there's not really any shared context if you're just matching a random person from some random place in the world with you. That's probably something I could have realized before building it, but there's nothing like seeing a user getting matched with someone who literally doesn't speak English and thinking, "Oh yes, this is a problem. Okay. All right. Let's rethink this a little bit." Maybe that one was predictable, but there are other examples I don't think I could have predicted, even if they seem easy in hindsight. For instance, I noticed this user behavior where people would block their webcams with their finger and chat with each other.
And I thought this seemed like users misbehaving, users not talking on an equal footing with the people they're talking to, so maybe this was a behavior to be discouraged or banned. But it turns out, when I actually used the product to talk to some of those users, I learned that they really just wanted an audio option. It's kind of obvious when you think about it: they were just trying to use the app without showing their face, and they felt more comfortable talking and answering the types of questions being posed in some of the Speak Easies with their video off. So that was a very clear case where an audio option actually makes a lot of sense, whereas before I launched, I felt really strongly that we didn't want to have audio, that we wanted face-to-face contact between everybody. I'm reminded of the quote from The Matrix, where Morpheus says, "Neo, sooner or later you're going to realize, just as I did, that there's a difference between knowing the path and walking the path." You've got to put your stuff out there and see how people take it.
Nix Crabtree: Thank you. So, we're going to go to audience questions in a second, but before we do, I want to ask all of the panelists: what is your favorite line of code? Let's start from the very beginning. Dave?
Dave Farley: Yeah, this goes back a very, very long time. I used to have a ZX Spectrum, and there was a book in the ZX Spectrum realm where, if you did a very esoteric PLOT command, just one line, it would build this big kind of mandala-like pattern on the screen. I used to know it off by heart, and I could just kind of walk up and type it in and take over the screen on a ZX Spectrum. It was just one line, and I was always dead impressed with that.
Nix Crabtree: Savvas, how about you?
Savvas Kleanthous: That's a difficult question for me. I tend to work quite a long time on the products that I'm involved with, and for most of my career I ended up, at some point, deleting more than I contributed. So, I tend not to get really attached to pieces of code, because I'll probably end up deleting everything. Well, not everything, but at least every individual piece of code will change eventually over the lifetime of the product. So, unfortunately, I don't think I have a favorite line of code. I tend to have products that I like, or pieces of libraries that I wrote that I enjoy.
Nix Crabtree: Thank you. Lisa, how about you? Lisa?
Lisa Crispin: Yeah, sorry, I muted myself. Savvas made me think of something when he talked about deleting code. I think that's really a great thing to do; we don't do it enough. I guess I haven't been a programmer in my own right for a long time, but pairing with developers, my favorite thing has been to pair with a developer to analyze a so-called flaky automated test, one that was marked flaky so nobody paid any attention to it, only to find it really was a bug. It was just a bug that didn't happen every time and was hard to reproduce, like maybe a timing issue. Being able to fix those and prevent production regressions was very satisfying.
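A hedged illustration of the kind of "flaky" test Lisa describes: a timing assumption that usually holds, so the test only fails occasionally under load. The example and its fix (polling for the condition instead of sleeping a fixed time) are generic, not taken from any specific codebase.

```python
import threading
import time

results = []

def slow_worker():
    time.sleep(0.05)  # stands in for real work with variable latency
    results.append("done")

# Flaky version: assumes the worker always finishes within 10 ms.
# It passes on a fast, idle machine and fails under load, so it gets
# labelled "flaky" when it is really exposing a timing bug.
#
#   t = threading.Thread(target=slow_worker); t.start()
#   time.sleep(0.01)
#   assert results == ["done"]   # intermittent failure
#
# Deterministic version: poll for the condition with a generous timeout.
def wait_for(predicate, timeout=2.0, interval=0.01) -> bool:
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return predicate()

t = threading.Thread(target=slow_worker)
t.start()
assert wait_for(lambda: results == ["done"])
t.join()
print("test passed deterministically")
```

The same pattern applies whether the intermittent failure is in the test (fix the wait) or in the product (a genuine race); investigating rather than ignoring the flake is what tells you which one you have.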
Nix Crabtree: Thank you. Feross?
Feross Aboukhadijeh: So, I'm going to cheat a little bit. I don't think one line of code can be that interesting, so I'm going to say my favorite hundred lines of code. One of the projects I worked on was trying to figure out what is the most annoying website one could build if you were to use all the different web features that the browser now affords us. With HTML5 and all these new powerful APIs, there's actually quite a lot you can do in terms of putting together all of the features to make the worst possible website experience. It's about a hundred lines of code I wrote, if you want to go see it. I don't recommend opening it in your primary browser, because you may actually need to force quit your browser to escape the website. It's actually that bad, but maybe write this URL down and try it out later: theannoyingsite.com. You type that in and hit enter, and you'll have a very interesting experience.