John Smart

Recent posts by John Smart

Thucydides (http://thucydides.info) is an open source project that tries to address a few of these concerns and make large-scale acceptance tests easier to write. For example, Thucydides makes it easier to build libraries of reusable methods that implement your Cucumber-JVM or JBehave step definitions, so logic can be shared between step definitions. Step libraries are automatically instantiated and injected into other step libraries as required, which makes it easier to share information between steps in a modular way. Thucydides also provides a thread-local session where you can store arbitrary variables.
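By way of illustration, here is a minimal sketch of what that can look like, assuming the @Step/@Steps annotations and the Thucydides.getCurrentSession() API; the class and method names are invented for the example:

// TravellerSteps.java - a reusable step library
import net.thucydides.core.Thucydides;
import net.thucydides.core.annotations.Step;

public class TravellerSteps {

    @Step
    public void registers_as(String travellerName) {
        // ...call the application under test...
        // Share data with later steps via the Thucydides thread-local session.
        Thucydides.getCurrentSession().put("travellerName", travellerName);
    }

    @Step
    public void searches_for_flights_from(String origin, String destination) {
        String traveller = (String) Thucydides.getCurrentSession().get("travellerName");
        // ...perform the search for this traveller...
    }
}

// SearchFlightStepDefinitions.java - thin JBehave step definitions that delegate to the step library
import net.thucydides.core.annotations.Steps;
import org.jbehave.core.annotations.Given;
import org.jbehave.core.annotations.When;

public class SearchFlightStepDefinitions {

    @Steps                         // instantiated and injected by Thucydides
    TravellerSteps traveller;

    @Given("a traveller called $name")
    public void aTraveller(String name) {
        traveller.registers_as(name);
    }

    @When("she searches for flights from $origin to $destination")
    public void searchingForFlights(String origin, String destination) {
        traveller.searches_for_flights_from(origin, destination);
    }
}

The same TravellerSteps library could be injected into other step definition classes, which is where the reuse comes from.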

The Cucumber-JVM integration with Thucydides is currently in beta testing, and should be released by Devoxx (early November 2014).
9 years ago
Take a look at http://wakaleo.com/thucydides-sample-reports/index.html for a basic example of high-level living documentation.
9 years ago
All of the above :-). Living documentation refers to documenting your high-level requirements in terms of business-readable executable specifications and examples - the reports that are generated ("test reports") document not only the tests that were executed, but also the requirements as they have been implemented. This is typically done on the CI server.

When you take this to the next level, and integrate your living documentation with your requirements stored in tools like JIRA or Rally, you get not just a readable test report, but documentation that covers your high-level requirements (epics, capabilities, features...) as well. When this is done, the requested requirements, the executed tests, and the project status are automatically kept in sync.

At the unit-testing level, "living documentation" refers to writing your unit tests in the spirit that you are writing not tests, but technical documentation, API examples, and low-level specifications for your code.
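For example (using a purely hypothetical DiscountCalculator class, not taken from the book), a unit test written in that spirit reads more like a worked example of the API than a test script - a minimal sketch:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// A hypothetical class under test, included only to make the example self-contained.
class DiscountCalculator {
    public double discountFor(int purchasedBooks) {
        return purchasedBooks >= 5 ? 0.10 : 0.0;
    }
}

// The class and method names describe behaviour, so the tests read as
// low-level specifications: "when calculating discounts, five or more
// books earn a ten percent discount".
public class WhenCalculatingDiscounts {

    @Test
    public void fiveOrMoreBooksShouldEarnATenPercentDiscount() {
        DiscountCalculator calculator = new DiscountCalculator();

        double discount = calculator.discountFor(5);

        assertEquals(0.10, discount, 0.001);
    }

    @Test
    public void fewerThanFiveBooksShouldEarnNoDiscount() {
        DiscountCalculator calculator = new DiscountCalculator();

        assertEquals(0.0, calculator.discountFor(3), 0.001);
    }
}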
9 years ago
Some good tips from Junilu. In addition to these, here are my $0.02:

I anecdotally know there are places in London that have implemented BDD very successfully - some of the London BDD crowd might be able to elaborate further. In Australia, where I do the bulk of my consulting work these days, there are many large organizations that have successfully introduced BDD practices. Of the four major Australian banks and the Reserve Bank of Australia, at least three have already introduced BDD practices, and the other two are in the process of doing so. I have also worked with several large insurance companies that have had a lot of success introducing BDD practices.
- Management buy-in is essential. BDD practices potentially have the same scale of impact as introducing something like Scrum - they change the way people work, particularly for the BAs and the testers. This has to be pushed from above to some extent.
- When you first introduce a new practice, you will get it wrong in some way. And that's fine; everybody does. You will need a few iterations (and here a bit of external coaching does wonders) to bed down the practices and tailor them to your organization. Cater for this when you plan the introduction process.

Introducing TDD can also be a challenge, but mainly because it requires significant "buy-in" from developers. Teams that successfully introduce TDD in my experience do it by (a) training the whole team in proper TDD practices, and then (b) getting an experienced TDD practitioner to coach/pair with developers until they "get it", and also to help adapt the technical testing practices to their particular projects and environments.
9 years ago
Junilu sums it up pretty well: BDD and TDD are not mutually exclusive. BDD at a requirements level is about discovering and understanding what features you need to build ("building the right software"). The acceptance criteria that emerge from this process help guide the more detailed coding, where BDD practices can be applied to TDD very naturally (BDD was after all originally a way to teach TDD best practices). You can think of BDD at a unit testing level as "the way many really good TDD practitioners do TDD". If you want to call that "TDD", that's fine. If you want to call it "BDD for APIs", that's fine too. Personally I find the top-down approach that BDD encourages helps focus your TDD efforts. After each test, you ask "what else should this feature do", or, more precisely, "what else should this feature do to satisfy the acceptance criteria"? I also find it useful to think in terms of writing worked examples and specifications for my classes, because that way I understand what I was trying to do when I come back to them later.

BDD also helps you know when you have done enough. Basically, once I have confidence that a class does what it needs to do to help meet an acceptance criterion, I can move on to something else. In practice, this means that I can't think of any other answers to the question "what else should it do?". But if there is significant complexity buried in an implementation, that would probably worry me, so I might pull it out and write low-level specs for that code as well. I think that is what Junilu is referring to.
9 years ago
Many organizations do indeed prefer "quick and dirty" solutions hoping to get the code out the door faster, rather than investing extra effort in more quality-focused practices. These organizations are often also reluctant to adopt more modern development practices in general if it changes the way they have always worked.

That said, it is hard to generalize. A large number of the clients I work with to implement BDD and related practices are large banks, insurance companies, or government agencies. They take a few months to get it right, and they don't do everything perfectly to begin with (as with any new practice), but the ones who persist have some great success stories to tell (see for example http://www.scrum.com.au/2014/sessions-2014/#Rowan Bunning). I also know many people in smaller companies and startups who are keenly aware of the value of quality practices.

I don't think experienced practitioners see a dichotomy between BDD/TDD (and code quality in general) and speed: other than the initial investment in learning the new techniques and changing a team's habits, BDD practices tend to make you go faster, because (a) you focus on writing code that serves a known purpose, (b) you spend a *lot* less time debugging (closer to 90% less than 40% less in my experience) and, most importantly, (c) you avoid implementing features that would end up not being needed after all, and focus only on the genuinely useful ones. The collaborative nature of BDD helps you concentrate not only on which features are the most important, but also on how to build them in the most efficient way (the devs are good at this) and on making sure you haven't forgotten anything (the testers are good at this).

The study referred to in chapter 10 needs a bit of context, as the numbers might be misconstrued. It was a study (from 2008) on TDD (not BDD) practices, done with university students or new developers who had no previous experience with TDD. In other words, they were learning TDD as they went. I think this would explain the additional time taken (the study reported 40–90% fewer defects, and a 15–35% increase in the initial development time). Also, the 15–35% "increase in initial development time" didn't seem to include the time spent fixing bugs afterwards. As Kent Beck once said (I think), it is much easier to write code if it doesn't have to work ;-). In my experience, an experienced TDD/BDD practitioner will typically take slightly less time to deliver a feature than a developer not using TDD, and that feature will be virtually bug-free.

Personally, even on my own projects, I practice a BDD style of TDD at the coding level for almost all code, because I find it allows me to get code out the door faster.
9 years ago
Small companies are very interested in reducing wasted effort and producing value (think "lean startup"). BDD is essentially a technique for doing this at the requirements level, so explaining how BDD relates to discovering and driving out value can help a great deal (not just "writing code", but "figuring out what code to write to produce the most value"). Techniques such as feature injection and impact mapping are a great way to identify valuable features, and conversations around concrete examples are a great way to build up a shared vision and get buy-in and ideas about new features.
9 years ago
In short: BDD at the coding level (AKA BDD-style TDD) is possible and valuable for any team: the benefits include fewer bugs, more focused development activities, and great technical documentation for your APIs in the form of well-written and well-structured unit tests. But it is at the requirements level that the real benefits of BDD become obvious: more relevant and more valuable features, a better shared understanding of the requirements, fewer misunderstood, misaligned or forgotten requirements, less waste, more efficient use of tester time, and so on. These benefits hinge on collaborative discovery of the requirements, which is very hard to do in a prescriptive software development approach where the requirements are "completed" and "signed off" before being handed to the development team for implementation. Similarly, the role of the tester is handicapped if she can only intervene at the end of the process, or is limited to automating regression tests. So in essence, BDD without the ability to collaborate on requirements discovery is somewhat hamstrung.
9 years ago
In my experience the BDD model is about collaboration and conversation first, and tools second. The general BDD approach of collaboration, and discovery of features and requirements through conversations around concrete examples, holds for any technology stack. Cucumber or similar is available for most modern languages, though it is perfectly possible to write automated acceptance criteria through a web interface or via web services in a different technology to the one used in the main project. Automating acceptance criteria in dynamic languages such as Ruby and Groovy is a bit easier than with static languages such as Java and C#, but not significantly so, and other factors (such as the technology stack the team is used to and is using on the project) can have a much bigger influence on productivity.
At the unit testing level, more modern languages and tools (such as Spock for Groovy and Java, specs2 for Scala, and to a lesser extent tools like RSpec and NSpec) do allow for much more expressive and concise executable specifications, which makes practicing BDD that much easier, but BDD in JUnit or NUnit is perfectly possible as well.
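As a rough illustration of that last point, the given/when/then structure that Spock expresses with labelled blocks can be approximated in plain JUnit with comments and a behaviour-revealing method name; the Account class below is invented for the example:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// A hypothetical class under test, included only to keep the example self-contained.
class Account {
    private int balance;
    Account(int openingBalance) { this.balance = openingBalance; }
    void deposit(int amount) { balance += amount; }
    int getBalance() { return balance; }
}

public class WhenDepositingFunds {

    @Test
    public void aDepositShouldIncreaseTheAccountBalance() {
        // given: an account with an opening balance
        Account account = new Account(100);

        // when: the customer deposits some money
        account.deposit(50);

        // then: the new balance reflects the deposit
        assertEquals(150, account.getBalance());
    }
}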
9 years ago
Hi Krystian,

The book requires no prior experience in TDD, and teaches everything assuming you are new to the field. It is accessible to beginners and is aimed at both developers and non-developers (BAs, testers, product owners). That said, it does treat many subjects related to BDD in considerable depth.
9 years ago
Strictly speaking, Scrum doesn't mandate much in the way of technical practices - it focuses pretty much entirely on project organization. But Scrum experts will confirm that you can't do sustainable Scrum without good technical practices. Consider:
- http://martinfowler.com/bliki/FlaccidScrum.html
- http://blog.mattwynne.net/2013/08/12/half-arsed-agile/
- http://www.scrumalliance.org/community/articles/2009/november/the-top-six-technical-practices-every-product-owne
- "Improving Technical Practices Is Not Optional" - Mike Cohn, "Succeeding With Agile", page 171 (the book as a whole is worth reading as it deals with a lot of objections of this sort)
10 years ago
If they get out of sync, it's easy to spot: the tests don't run :-). For BAs and testers, the separation keeps the requirements clean and isolated from the implementation, which makes them easier to work with collaboratively. But you do have to be well organized on larger projects to avoid stepping on everyone's toes. I'll include some techniques for doing this in JBehave somewhere in the book.
10 years ago
I didn't mean that Given-When-Then is overhead for unit tests - in fact it works really nicely with Spock. When you use tools like JBehave or Cucumber, the scenarios (the Given-When-Thens) live in plain text files, and the test implementations are in source code. Putting the requirements in text (.story or .feature) files is more overhead, as you need to keep the specification text and the implementation code in sync, but the overhead is justified for BDD acceptance criteria because it makes them much easier to use as a communication tool. For unit tests, something like Spock is better.
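To make the split concrete, here is a hypothetical sketch (names and wording invented for the example): the scenario sits in a plain-text .story file, and the matching JBehave step implementations sit in source code that has to be kept in line with it:

// frequent_flyer_earns_points.story (plain text, maintained alongside the code):
//
//   Scenario: Frequent flyers earn points when they fly
//   Given a frequent flyer with 100 points
//   When she flies from Sydney to Wellington
//   Then she should have 300 points
//
// The step patterns below must match the scenario wording; if the wording
// changes and the steps are not updated, JBehave reports the steps as
// pending rather than passing.
import org.jbehave.core.annotations.Given;
import org.jbehave.core.annotations.When;
import org.jbehave.core.annotations.Then;
import static org.junit.Assert.assertEquals;

public class FrequentFlyerStepDefinitions {

    private int points;

    @Given("a frequent flyer with $points points")
    public void aFrequentFlyerWith(int points) {
        this.points = points;
    }

    @When("she flies from $from to $to")
    public void sheFlies(String from, String to) {
        points += 200;   // stand-in for calling the real application
    }

    @Then("she should have $expected points")
    public void sheShouldHave(int expected) {
        assertEquals(expected, points);
    }
}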
10 years ago
Data-driven testing in Spock is pure awesomeness :-).
10 years ago