Visualising systems: the NEPAl mnemonic

Last week I posted a general overview of the conference I went to the week prior. In the first session after the Keynote I presented a workshop entitled “Visualising systems”; I kept the details brief there so that they could go into their own post. Today I’ll go into more detail on the mnemonic and the thoughts behind it, the workshop, and some of their history.
This mnemonic isn’t intended for use in any specific type of visualisation; it should be generally applicable to any model or system that you want to make a visual representation of. Nor is it specific to testing; I just see these skills and thought patterns as useful in systems thinking, which I see as a key component of skilled and useful testing.
This mnemonic tries to help us think about the systems we are visualising so that we can visualise them effectively. In the current iteration the mnemonic itself is NEPAl (yes, the lower case “l” is intentional):
– Narrative
– Elements
– Perspective
– Abstraction level
If after reading this you have thoughts on how the mnemonic might be improved for specific types of systems then I’d be keen to hear your thoughts.

Narrative is first on the list because this is what ties everything together. Stories have been the backbone of human memory and understanding for a long, long time. Stories can be seen as a journey of start, middle, and end…or they can be cyclical, branching, or any other number of structures. One useful heuristic for getting people to remember and understand your visualisation is whether people can either understand the story of your system from what they’re seeing, or easily remember your story by using your visualisation as a reminder – stories do well for your visualisations as either reinforcers or reminders.
Narrative structure can be useful in defining how to shape your visualisation. Just as we can recognise the structures of other visualisations and models that we have found useful and apply them to things we make ourselves, stories have well recognised structures and we can apply those structures to our own models, and to the stories by which we remember and explain them. There are many different recognised narrative structures, some of which we can apply to our models of systems.
Some structures are linear; they start at one point, end at a different point, and things happen along the journey. In many types of structures (such as the Three Act structure and similar variants) there are specific points that create tension and problems; twists and confrontations. The resolution of these moves the people and plot towards its climax. Many systems can be very simply visualised or explained as linear: what actions happen over time, how money or information flows from place to place, and so on.
Some other structures, while still being linear, may end at or close to the start. I’ll highlight a few kinds of these quickly:
– Circular narratives, which have the start and end be the same point, although in a different state. An example might be a process map of a complaints ticket queue, which might start with a complaint being made, and end with that same complaint being resolved. Some very specific kinds of these stories exist, like the Hero’s Journey/Monomyth, which frequently end with the protagonist returning to where they started to contrast how things have changed.
– Feedback narratives, where the end of the “linear” narrative impacts the start of the next iteration. Many forms of iterative software development can, and frequently are, modelled this way.
– Ring narratives, where points from one “side” of the story reflect the other “side” in some way. A good example of this in software development is the V model; each point on the left of the model has a matching part on the right of the model. While usually not treated as literally in story-telling, it can also be compared to Newton’s Third Law – things that happen before the fulcrum of a story have some kind of thematic match on the other side of it that is due to it, or resolved by it.
Some other stories just don’t fit these kinds of models. We’ll call them “non-linear” for simplicity. Maybe it’s a gigantic flowchart that could go in any number of directions, loop any number of times, or otherwise isn’t simple enough to categorise like this. That’s fine. It might, however, tell us that we don’t have an easy way of explaining our story, and a lack of explainability can be a way of recognising a problem.
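As a toy illustration of these shapes (the classification rules here are my own sketch, not part of the mnemonic), a system’s flow can be treated as a directed graph and checked for a linear or circular narrative shape:

```python
# Toy sketch: classify a system's "narrative shape" from its flow edges.
# A linear story is a simple path, a circular one loops back on itself,
# and everything else is lumped together as "non-linear".

def narrative_shape(edges):
    """edges: list of (from, to) pairs describing the system's flow."""
    if not edges:
        return "non-linear"
    nodes = {n for edge in edges for n in edge}
    ins, outs = {}, {}  # in-degree and out-degree per node
    for src, dst in edges:
        outs[src] = outs.get(src, 0) + 1
        ins[dst] = ins.get(dst, 0) + 1
    # Every node having exactly one in- and one out-edge looks circular
    # (a connectivity check is omitted for brevity).
    if all(outs.get(n, 0) == 1 and ins.get(n, 0) == 1 for n in nodes):
        return "circular"
    # A simple path: one start (no in-edges), one end (no out-edges),
    # and no branching anywhere in between.
    starts = [n for n in nodes if ins.get(n, 0) == 0]
    ends = [n for n in nodes if outs.get(n, 0) == 0]
    if (len(starts) == 1 and len(ends) == 1
            and all(outs.get(n, 0) <= 1 and ins.get(n, 0) <= 1 for n in nodes)):
        return "linear"
    return "non-linear"
```

A complaint-ticket queue whose final state feeds back into its starting state would come out "circular"; a branching flowchart would come out "non-linear", which might be the hint that the story is hard to explain.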
I originally had this item in the mnemonic named “Tell a story”. Telling a story is the crux here – a story is what ties it all together. Understanding how the story of your system relates to the visualisation will inform how you think about that visualisation. So, here are some useful things to think about when defining your narrative:
– What story is your visualisation trying to tell? Does it have a clear purpose or theme, or is it trying to be general use? What does this mean for the people who have to use or remember it?
– Are you passing down your narrative explicitly, through writing or some kind of legend, or is it implicitly part of the visualisation? How might it change, be lost, or be forgotten over time? Will it exist to prompt a memory of things you already know, or will you have some kind of legend to help describe what the various narrative structures and plots are?
– Where does the story of this system start and end? Can it have multiple starts and ends? How do the things that happen in one part of the system impact the other parts?
– Are there any loose ends that should be explicitly tied up? How are each of the twists and confrontations that happen during the narrative best represented and resolved?
Given the importance of Narrative to this mnemonic, the workshop I run to explain it uses shared storytelling; groups of people create visualisations of systems and stories that explain them, and tell these stories to the other groups in the workshop. As an interactive workshop, these groups give relatable examples that highlight many of the techniques that can communicate information using visualisation.

Elements refers to two things which are interlinked: smaller, broken down pieces of the content of your story, and the way you might represent them visually. For instance, if we’re trying to visually model a flow of information, you might have some way of representing each state the information can be in, each place the information can be stored, people or things that access or modify the data, and so on.
By taking the story of your system, you can turn each character, place, item, or quality of those things into an element; take your monolithic story and turn it into simpler, smaller parts. I’ve seen this referred to in testing circles (especially as relates to Michael Bolton) as Factoring1,2; this seems linked to De Bono’s Fractorisation3, and the mathematical process of Factorisation4.
Looking at models that I’ve found useful shows how they’ve taken a large idea and broken it down into smaller, more useful pieces (say, how the Heuristic Test Strategy or the Little Black Book on Test Design models the qualities of a system under test). Some models predating computer science include the ways that people have classified animals – before modern science they were divided by prominent features and behaviours.
Aristotle (one of the first people to spend a lot of time on taxonomy whose writings still survive) also had another interesting way of modelling “causes”, which I feel can be well applied to the elements of a system:
– the Material cause; what substance or material is the thing made from?
– the Formal cause; what shape or qualities does this thing have?
– the Efficient/moving cause; what triggers this thing? What causes it to change, start, or stop?
– the Final cause; what purpose does it have? Why is it in the system?
We can use these causes, or other models that you find useful, to help factor the system into individual elements for visualisation. What types of things is the system made from? What qualities do those things have? What purpose does each serve in the system? What comes before or after it? Should things that are similar to each other be represented in a similar way, to make what they are, and how they fit into the story, more explicit to the consumer of the visualisation?
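To make the factoring tangible, here is a minimal sketch of capturing an element with Aristotle’s four causes as prompts (the field names and the complaint-ticket example are my own, echoing the circular narrative example earlier):

```python
# A minimal sketch of factoring a system element using Aristotle's
# four causes as prompts for what to record about it.
from dataclasses import dataclass

@dataclass
class Element:
    name: str
    material: str   # Material cause: what is it made from?
    formal: str     # Formal cause: what shape or qualities does it have?
    efficient: str  # Efficient/moving cause: what triggers or changes it?
    final: str      # Final cause: what purpose does it serve in the system?

ticket = Element(
    name="complaint ticket",
    material="form data entered by a customer",
    formal="open / in-progress / resolved states",
    efficient="a customer lodging a complaint",
    final="getting the complaint resolved",
)
```

Similar elements sharing the same structure like this makes it easier to represent them in a similar way in the eventual visualisation.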
As per its original name (“Identify elements”), in the workshop that I present this mnemonic in I have people start by taking sticky notes and writing down as many different things (elements) as they can think of that might be represented in a system visualisation. I then ask them to form a basic system using all or some of the elements they have chosen, and explain this to the group using an arbitrary story of that system. It’s always interesting to see the different systems that people come up with, given the very basic guidelines in the instruction. Some people create a very generic model of a process or system; others take very specific scenarios that they have been in and model how they happened.

Perspective is all about the lens through which we are viewing the system. While the system you’re visualising may be an objective thing, the visualisation you make of it is a subjective image. It’s a map of the territory, not the territory itself. Given that our map is going to be imperfect, we need to decide what the map will focus on – what it (or its users) cares about. Just as there are different ways you can perceive a story (such as from the point of view of a specific character, or from a third-person perspective), how your visualisation is structured will depend on what perspective you wish to view the system from, or what type of element is most important to you.
As described in the section on Narrative, many linear visualisations follow the flow of a single thing: time, money, information, actions, consequences, and so on. Each of these is a great example of a different perspective you can take on the same system. Which perspective you take will filter what kinds of elements you place, and how you represent them. This is a way to filter the visualisation – lower the noise to increase the signal.
Some useful questions for thinking about perspective might include:
– “What connects these elements together?”
– “What most differentiates the elements I want to represent?”
– “What information that could be here is better left unsaid or implicit, rather than made explicit?”
These questions will shape what elements you use and how they are represented; they will also shape the map on which they’re placed and what that represents.
For instance, an organisational hierarchy chart generally cares about the control of, and relationships between, people; how you map the relationships between the people could represent different types of command and control structures, power dynamics and so on. Solid and dotted lines, and the colour and shape of the polygon a position sits in, are all commonly used. Other options include the absolute position in space representing some kind of authority (financial or otherwise), or the size of the shape itself representing something. All of these (location, relationship, size, shape, colour, content) are elements that can be used in visualisation to great effect.
Other examples of using the map itself as part of the visualisation include graphs explicitly labelling their axes with things such as money or time, which forces everything else displayed to use that perspective; quadrant plots pick two axes that matter to the viewer and display the relation of parts of a system to those axes by their position in space; affinity maps let the whitespace between elements represent the affinity between them – ideas placed closer to an idea share more with it, ideas further away share less.
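A quadrant plot’s position-carries-meaning idea can be sketched in a few lines (the axis names here, effort and value, are hypothetical examples; pick whichever two axes your viewers care about):

```python
# Toy sketch of a quadrant plot's map carrying meaning: an element's
# position along two chosen axes determines which quadrant it lands in.

def quadrant(effort, value):
    """Map an element's (effort, value) scores in [-1, 1] to a quadrant label."""
    horizontal = "high-effort" if effort >= 0 else "low-effort"
    vertical = "high-value" if value >= 0 else "low-value"
    return f"{horizontal}/{vertical}"
```

Here the map itself (the two axes) does the filtering: any element that can’t be scored on both axes simply has no place in this perspective.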
I originally called this part of the mnemonic “Pivot your data”. I felt that “Perspective” was more approachable and didn’t need as much history for people who hadn’t spent too much time playing around with Excel. To give a concrete example of this step in the workshop, I ask people to take their existing system (which in the previous step they had put together from an arbitrary group of elements), and to think of a different perspective to view the system from.

Abstraction level is here to give us a more contextual perspective: the system we are visualising is both part of a larger system and has other systems within it – every system is systems all the way down. We choose to hide the complexity below a certain level, or in certain black boxes, because it makes the system easier to understand and more useful to us; similarly, the part of the system that we are visualising might be the hidden depths of another system to someone else, and we need to think about where our system connects to, includes, and is a part of that or another system.
For example, and speaking as someone whose speciality is not in code, code itself is an abstraction. Many of the languages used today are high-level; people write in close-to-natural-language words. We have tools that translate this into something much lower-level. It becomes machine code, and eventually we zoom out and it’s all electrons running around hitting…stuff that I know even less about. But the point is that we can zoom out and out and out, with our system becoming a tiny speck somewhere, or we can zoom in and in and in and realise that we’re staring at something that, moments ago, was probably a trivial detail we wouldn’t have even thought about. But it’s all systems, and they all impact each other. Higher-level systems are like the Final cause, the purpose of this system: whatever we do here impacts something up there, and needs to fit into it. Lower-level systems cause hitches where we needed to smooth over the complexities to make things more understandable (or just because we didn’t understand them); lower levels trade simplicity for leaks.
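As a concrete taste of zooming one level down, Python’s standard `dis` module will show the bytecode hiding underneath a tiny high-level function – and the interpreter, machine code, and electrons are all further levels below that:

```python
# Peek one abstraction level down: the bytecode underneath a tiny
# high-level function. The compiler, interpreter, and hardware below
# this are further levels still.
import dis

def add(a, b):
    return a + b

# Each bytecode instruction is a lower-level element of the same system.
instructions = [instr.opname for instr in dis.get_instructions(add)]
```

One readable line of source becomes several bytecode instructions; whether that detail belongs in your visualisation is exactly the abstraction-level decision this section is about.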
Perhaps we can think of this as a corollary to Spolsky’s Law of Leaky Abstractions (which I’m sure someone else has already thought of): “All non-trivial models, to some degree, will be abstracted”. When modelling and visualising our systems, we need to think about which things are safe to abstract, and which need to be. Just as Perspective allows us to filter out elements that just create noise, Abstraction level can help us make informed decisions on how far down the rabbit hole we need to go when mapping our system.
On a testing side-note, I consider one of the important things that testing does is expose these leaks; in many of the projects I’ve worked on, the leaky abstractions that we use in development create many of the defects that matter. Learning to move up and down abstraction levels and how to gather information that keys us into when one of these connections between systems might be leaky is a very useful skill.
Many times we oversimplify things out of habit. It’s important to make a conscious choice about the right level of abstraction to use. Some of the questions that you can ask that help you identify where you are abstracting something without realising, or what level of abstraction might be useful in a given situation include:
– “What comprises this element? If I had to factor it, what might that look like?”
– “What possible Final causes can I think of for this system? What might this mean about the content of the system that I should keep in mind?”
– “How important is it for the perspective I am taking that I have this level of detail? How might a future user of my visualisation find this detail out if they needed to?”
– “Does this level of detail create more noise than signal? Do I have a more effective element to present the information with, or is it better to cut the information off at a certain level of abstraction?”
– “Are the levels of abstraction I’m thinking about abstracted in some way? If the levels blurred or were further divided, what might this look like, and could this be useful information?”
– “Is there any information from a sub/super-system that might better put this system in context? Is this knowledge best served explicit or implicit in the visualisation?”
Originally this part of the mnemonic was “Choose your abstraction level” (for those trying to follow along, the original mnemonic was “PICT” – it wasn’t even in a semblance of order). In the workshop I ask for a new system to be visualised from the old one at a different abstraction level. Some of the results are interesting: some are zoomed in or pulled out, some are meta-systems, like a generic model built from a previously specific one.

Parting thoughts
If you made it this far, thanks. Seriously, thank you. That’s all for now though – this ended up being more of a beast than I’d wanted, but it’s better published than an eternal draft. I’d be interested to hear your thoughts on the ideas I’ve presented, and if you want me to expand on any given area I’ll see what justice I can do it.
Still on my plate to write about are some larger-scale thoughts on Aaron Hodder’s “All Kinds of Minds” talk from WeTest Weekend Workshops 2015, as mentioned in the last post – let’s see where that goes.

WeTest Weekend Workshops 2015: Thoughts

Last weekend I went to WeTest Weekend Workshops, a community-run testing conference that has been run from the city I live in (Wellington, New Zealand) for the last two years, but was in a different city (Auckland) this year. This year was phenomenal, and I’ve decided to put some of my thoughts on the experience to paper (starting with the sessions I attended) and see where it leads me.

A word from Edwin
WeTest had several sponsors this year, but one is worth mentioning in particular. Assurity have been supportive of WeTest since the start: the two co-founders at the time were both employees there, WeTest Wellington generally runs out of one of their buildings, and so on. What was notable about their sponsorship is that Edwin Dando, who runs the Auckland branch of Assurity, asked to say a few words. He explained why they were a sponsor, and why he (who is better known for his links to the Agile community and practices than to the Testing community and practices) was so keen to be involved.
Edwin talked about his history with Agile: that he was an early proponent in NZ, saw the difficulty in uptake and in people taking Agile seriously or giving it a fair go, and described the maturation of Agile in NZ over the time he’s been involved. He spoke to how he saw the same progress of maturation, development and growth in the Testing community in NZ through groups such as WeTest.
While I’m sure there’s some element of commercial gain in sponsoring community groups, I didn’t doubt what Edwin was seeing or saying. It spoke well to the testing community that our growth is being noticed by other parts of the larger IT community, and it spoke well to Edwin that, from his experiences in the New Zealand Agile community, he’s on board with giving us in the New Zealand Testing community his support. Shirley Tricker tweeted on the day that Edwin’s sponsor talk was “the best a few words from our sponsor ever”, and from my limited experience, I’d agree (as did others on Twitter).

The Keynote for the conference was given by Katrina Clokie (amongst other things, co-founder of WeTest Wellington and editor of Testing Trapeze). The theme this year was “Diversify”, and Katrina gave a punchy speech on four areas in which you can diversify as a tester: How you think, your Technical skills, a Specialism, and Leadership & Community outreach. She talked to examples in each of these areas, and had a hand-out with many references and options for development in each direction.
The speech and the ideas presented were powerful, and presented well. She made her intent explicit: to get people thinking as if there were no standard confines for the career and personal development of a tester, and I think she succeeded. While my reaction on the day wasn’t particularly strong (I had had spoilers, being privy to an earlier version of the speech), I have already started looking into some of her examples and options (such as Alan Richardson’s book “Java for Testers”).
The Keynote was recorded and is available on YouTube. It’s not a long watch, and it’s well worth listening to. Katrina has also made a copy of the hand-out available here.

Visualising systems
In the first session I ran a workshop on visualisation techniques. It was a tweaked and updated version of a session I’ve run before (at the previous WeTest Weekend Workshops 2014), this time using the more relatable mnemonic NEPAl (Narrative, Elements, Perspective, Abstraction level), but following the same broad strokes. I plan on blogging on this mnemonic in more detail in the future, so I’ll move on to the other sessions I added and insert a link afterwards to whatever write-up I end up doing on the NEPAl mnemonic.

All Kinds of Minds: Let’s Talk Mental Health
The highlight of the day for me was Aaron Hodder’s talk (and the following discussion) on Mental Diversity. I think, given the number of tweets flying during and after the talk, that it may have had a similar impact on others too. Aaron had previously presented an experience report on the topic at a WeTest Wellington MeetUp, which evolved into this talk: initially about his experiences with Social Anxiety Disorder, but now also discussing some elements of Depression and Autism Spectrum Disorder (ASD).
One of the first ideas raised was that, like the benefits of having other forms of diversity in the workplace, having mental diversity in the workplace brings benefits too. Aaron couched this from the perspective of “super powers” – people who have previously struggled with mental illness, or with the consequences of their form of mental diversity (such as ASD, or even introversion), may have fundamental differences in the way their minds work, and those differences can almost be like super powers. For example, because Social Anxiety Disorder drives over-analysis of social situations, people with the disorder are likely to have strong analytical skills; they also tend to be highly empathetic. Depression was noted as reducing optimism bias, which could have benefits for testers, and so on.
The flip-side to the super powers that the mentally diverse can bring to the table is the kryptonite that they might suffer from. A large portion of the open season after Aaron’s talk was around how to maintain the care and feeding of those who don’t fit the standard model of the new IT professional in Agile – the discussion focused on the usual drive to hire and work with extroverts who interview well, and who can communicate in a particular given communication style. At this stage there weren’t many useful answers; far more questions were raised than answers given. In discussions with others after the conference it sounds like there might be some answers out there already, but not all grouped together in relation to the topic, possibly due to the stigma that still exists around talking about issues that touch on mental health.
This is another topic that I intend to come back to. Aaron has also expressed an interest in fleshing out the topic further and presenting it again in other circles in future, and I hope he does. I personally feel very strongly on the topic. The largest takeaway I had from the session was that the more people who step up and talk about this topic (even if there are legitimate concerns that it might have negative impacts on the way people see them), the easier it will be for others to talk about it; until people are willing to talk about it, there’s unlikely to be change.

Be a great manual tester to be a good technical tester
The third session I went to was Jennifer Haywood’s session on Technical Testing. This session is one that I would highly recommend for people feeling intimidated by technical testing, or otherwise feeling like they cannot break into, or are having difficulty breaking into, the technical side of testing. The core message I took from her presentation was that the dichotomy between “technical” and “manual” testing is a false one; many people buy into the facade that they are a “manual” tester, and that there is some kind of jump that needs to happen to get to “technical” testing. She very effectively broke down these assumptions using suggestions and examples from the attendees, and showed simple and effective ways of using the techniques of “manual” testing to move into the “technical” space.
Of the sessions I attended, I feel I got the least from this one, but not because it was a bad session. Jen spoke well, recovered from technical difficulties with grace, and presented her ideas effectively. At the time I felt very frustrated by the assumptions underlying the questions she was asking – but it turned out that was intentional; in hindsight, I think she presented her ideas in a way that I might have done myself, which has led me to ask some questions about my own reactions to the session and what I can learn from them.
Jen also has presented previously on the topic of Technical Testing, and has written a related article in the latest Testing Trapeze magazine, which is well worth reading.

The Game of Testing
The sessions for me for the day were rounded out by Mike Talks, who ran a game in groups of 4–5 people, where one person per group acted as the computer in an old-school binary-search-type game, like one I remember playing around with in QBASIC as a kid. Each “build” of the game had different bugs, and the person playing the computer could only respond in given ways. How the testers in the group handled the responses, and how and when they moved on to new builds, prompted insightful comments from Mike on different elements of testing practice, both good and bad.
I ended up playing the computer in my group, which got quite far into the game. Some of the interesting curveballs thrown at me were players trying an “infinity” input (which I wasn’t quite sure whether to treat as a number or not) and the performance tester in the group building a loop to cycle through different inputs and responses. Requesting modifications to the program (for instance, log files) was one of the explicit lessons discussed. Mike also talked to sources of information in testing (using the known rules of the game to extrapolate test conditions), and made some pointed notes on when we stop testing, based on how much we really need to know to report a bug. An excellent example of this is the discussion in Gerald Weinberg’s book Perfect Software of the difference between testing and debugging: do we need to find that a bug occurs, or find the reason that it occurs? How much of each are we obligated to do before having the information acted on?
Mike’s session was an excellent one to finish the day on. It was engaging and energising (a very good note to close conference sessions on), and his insights still brought learning to the table despite our beleaguered minds (I, at least, had felt quite drained after the second session).

Other sessions:
John Lockhart: Incorporating traditional and CDT in an agile environment
Lightning Talks:
Craig McKirdy: Our future testers haven’t left school yet
Jennifer Haywood: Diverse Teams – The Myth of The Perfect Tester
Natalia Matveeva: I want to be a tester. What’s next?
Kateryna Nesmyelova: Miscommunication
Katrina Clokie: Become someone who makes things happen
Vikas Arya: Tester Accountability
Viktoriia Kuznetcova: Going to the Clouds
Adam Howard: Talking the Walk

One notable session here for me was Vikas Arya’s session on Tester Accountability – I was lucky to talk to Vikas before the Keynote started, and thought his session sounded interesting, including a lot of discussion around how you justify your methods of testing and reporting (a topic of personal interest to me). There’s also a mention in the latest Testing Trapeze that he’ll be writing an article for the next issue, which, given the feedback I heard for his session, I’m sure will be excellent. I’ll be keeping an eye out for Vikas’s work in future.

Wrapping up
It’s been nearly a week, it was a pretty intense day, and it’s provoked a large amount of thinking and overthinking in me, so it’s good to get at least some of it out in writing. My next two things that I intend on writing (as mentioned above) are more on the session that I ran, and further thoughts that have come out of Aaron’s discussion on Mental Diversity – so watch this space.
I’d like to give a big round of applause and congratulations to the organising team from WeTest Auckland (Shirley Tricker, Erin Donnell, Morris Nye, Kim Engel, Natalia Matveeva and Jen Hurrell), who did a fantastic job. This year was absolutely spectacular, and I’m looking forward to seeing what happens next year between the Wellington and Auckland WeTest communities.
Also, a shout out to the other people from WeTest Wellington who made the trek up to Auckland (many of whom also volunteered their time to present and/or help set up the location), both for the support that it provides the testing community in NZ, and because those on the same flight(s) as me were excellent company, even when I was completely out-peopled in the evening.

Parting thoughts
Were you at WeTest Weekend Workshops? Did you have a different take to me? Did you go to other sessions? How did you find it? I’d be keen to hear from you, be it in a comment, over Twitter or other methods. If you weren’t at WeTest and you’re local, would you consider going next year? If you’re not local, have you had success with, looked for, or even considered starting a community group?