Unity is strength

In 2020, the event industry made us proud through a demonstration of resilience and adaptability. We pivoted to build hospitals, united to support charities, protested to raise awareness, and all the while we tried to keep events alive through whatever medium could safely facilitate us. Although they deserve it, I’m not going to post a celebratory end-of-year review of all these amazing achievements. Instead, I want to recognise and applaud what I see as the most important win of 2020: our increased cooperation as an industry.

As events slowly began to close down in March, something new seemed to be happening. People began to share their knowledge, their experience, their challenges, their solutions. Experienced professionals delivered webinars explaining how they did their jobs. People posted their risk assessments on LinkedIn. Organisations united on Zoom to deliver their thoughts to the wider industry. Education was offered for free… In a time of uncertainty and instability, we unified as an industry in an attempt to become stronger.

It might seem the obvious way to overcome a challenge, but our industry hasn’t always been this open. Individual events often operate in silos, and when accidents or incidents occur they are rarely shared with the wider industry. Whilst all organisers are usually working to deliver safe, commercially successful, culturally relevant events, we seldom cooperate with other organisers to meet these shared aims. Be it through the fear of competition, or the impact of confirmation bias, we often operate as isolationists.

But if we don’t share and cooperate then we open ourselves up to failure. Toft and Reynolds (1997) affirm that a failure occurring in one system has a propensity to recur in a similar system; essentially meaning that an accident happening at your competitor’s event has a high chance of also happening at yours. Whilst our socials and press usually mean we are aware of incidents when they happen, such occurrences frequently stem from underlying latent failures (Reason 1990; Turner and Pidgeon 1997). Hence, unless we cooperate and share our learned experiences in detail, our industry colleagues are unlikely to understand their full causation. And how can we work to prevent accidents if we don’t understand their causes?

Isomorphic learning is considered essential to the development of safety in other industries (such as aviation, through the AAIB). This type of generative safety culture is what we should be aiming for. In 2020 we took big steps to increase cooperation and learning, but with a long way to go, it is pivotal that we carry this into next year as a central theme. Now is the perfect time to adjust any insular operations and begin to listen to the views, experience, and lessons of others, to prepare us for a return to this changed world. Let’s adapt our culture to share more widely and support more openly, so all events have the chance to survive and thrive in 2021.

So as we move out of this challenging year, we should celebrate our resilience and whatever small wins we may have achieved; but let’s also seize the opportunity to establish a culture of cooperation, one that benefits not just the future of our own events, but the industry as a whole.

Janis, I.L. (1972), Victims of Groupthink, Houghton Mifflin, Boston.

Reason, J. (1990), Human Error, Cambridge University Press, Cambridge.

Toft, B. and Reynolds, S. (1997), Learning from Disasters: A Management Approach (2nd Edition), Perpetuity Press, Leicester.

Turner, B.A. and Pidgeon, N.F. (1997), Man-Made Disasters (2nd Edition), Butterworth-Heinemann, Oxford.

Our perception of risk

We are all familiar with assessing risk. Whether it’s writing a formal risk assessment or making dynamic judgements on the ground, we have undertaken hundreds (if not thousands!) of risk calculations. Yet how often do we consider that our risk assessments might be distorted or biased? How often are we actually creating the most effective risk assessment possible?

When undertaking risk assessments we can be forgiven for thinking that methods such as ‘probability × impact’ will result in analytical and rational calculations of risk. However, in reality, these reasoned responses are potentially influenced by subconscious shortcuts, which our brains are thought to use to speed up decision making in dangerous situations. These cognitive shortcuts, referred to as heuristics, are much less analytical than we would like; instead, they are emotive and affective rules of thumb based on things like our past experiences and the way our brains store information. These quick-fire responses reduce the complexity of decision making; so, if we are being chased by a lion, our heuristic response means we don’t need to spend a long time assessing whether we should run away. But, often unbeknownst to the risk manager, heuristics can impact risk calculations by embedding subjective biases into our assessments. In order to ensure we are producing well-rounded and effective assessments, risk managers should be aware of what these biases are and how we can manage them.
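To make that concrete, here is a minimal sketch of the familiar 5 × 5 scoring matrix; the figures, band thresholds, and function names are purely illustrative assumptions, not drawn from any particular published methodology. The multiplication itself is perfectly analytical; the probability input is where judgement, and therefore bias, creeps in.

```python
# A minimal, illustrative sketch (numbers and bands are made up):
# the classic 'probability x impact' score, and how a subjective
# probability input changes the outcome.

def risk_score(probability: int, impact: int) -> int:
    """Simple risk rating: probability (1-5) multiplied by impact (1-5)."""
    assert 1 <= probability <= 5 and 1 <= impact <= 5
    return probability * impact

def band(score: int) -> str:
    """Map a 1-25 score onto illustrative low/medium/high bands."""
    if score <= 6:
        return "low"
    if score <= 12:
        return "medium"
    return "high"

# The same hazard, scored twice. The arithmetic is rigorous, but the
# probability input is a judgement call: the subconscious shortcuts
# described above can nudge it up or down without the assessor noticing.
considered = risk_score(probability=2, impact=4)   # 8  -> "medium"
inflated   = risk_score(probability=4, impact=4)   # 16 -> "high"

print(band(considered), band(inflated))  # medium high
```

The same hazard, assessed by the same method, lands in a different band purely because one human input shifted; that is the gap the biases below exploit.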

As heuristics are frequently founded on experience, one person will perceive risk very differently from another. This individualistic perception means a risk manager is potentially constrained by their own cognitive limitations. For example, if the assessor is less familiar with a hazard, they may perceive it as higher risk. In addition, if the assessor perceives a risk as being out of their control and as carrying potentially significant impacts, again they may rate it higher (this is known as the Dread Hypothesis). This is why you might see high-devastation events like terror attacks appearing frequently on risk registers. Using the same example, a number of recent terror attacks may mean the brain recalls them more easily, and hence judges them more likely to happen (the Availability Heuristic). So even as a reasoned risk manager, our assessments can be influenced by something as simple as frequent media coverage of particular events.

Furthermore, when assessing risk we tend to believe information that supports our own position or preconceptions. This tendency, known as Confirmation Bias, means we often discount other information regardless of how accurate or relevant it is. So, if you are creating your crowd management plan autonomously, are you more likely to believe it will successfully protect your audience? Additionally, if you are updating an old crowd management plan then you could be subject to Anchoring Bias: a tendency to rate risk close to a pre-existing data point, even when that data point no longer reflects reality.

At this stage I can almost hear the sighs of defeat as you wonder if you have ever made an uninfluenced, rational assessment in your life! Unfortunately, as risk managers, we are not only predisposed to heuristics; we are also affected by a myriad of external factors.

These external factors include the safety culture found in your organisation. Culture is formed from the assumptions, beliefs and practices of the organisation, and it influences everyone involved. There are academic taxonomies of safety cultures, on which Westrum and Reason offer more detail, but broadly speaking, if you are working for an organisation with a weak safety culture then you are likely to be influenced by its pathological attitude to risk. Such an organisation might not be enabling huge infringements of duty, but it could be permitting numerous minor violations, like failing to enforce the use of high-vis jackets on site. These actions combine to reinforce a weak culture. Working as a risk manager, you must be aware of the subconscious impact this can have on the way you assess your risks.

As risk managers, the influences of society can also affect our assessments. Different people will identify vulnerability differently depending on their societal experience, so we need to ensure we are using expansive vulnerability assessments. These assessments should go beyond our staff and customers, and include less commonly considered heterodox groups. We might not agree with the political extremists or rule-breaking fence jumpers at our event, but we still need to consider whether they are in harm’s way. Furthermore, those who we consider not to be vulnerable can become situationally vulnerable in different circumstances; the loud-mouthed teenagers we wrote off as trouble will suddenly become vulnerable when they end up caught in a circle pit with no understanding of its unwritten rules.

In short, risk perception is constructed from learned experience and external interactions, and will result in varying risk appetites across all stakeholders of your event. The police will have their own set of influences; the crowd have theirs; the Safety Advisory Group have theirs; and the promoter has theirs. However, this is not an insurmountable challenge! The key is to accept that we cannot eradicate these influences; we can only understand how they may affect us, and adjust to counteract them where possible. Integration and cooperation with different stakeholders at all levels of the organisation will help with this; always ask the question in multiple ways, to multiple people, and at multiple times. As Paul Slovic once said, risk does not exist ‘out there’, independent of our minds and waiting to be measured analytically and rationally; risk is subjective, shaped by external influences, and perceived uniquely by every person involved with it. To become better crowd and risk managers, we must understand the influences on our assessments, so we can lessen their impact each time we assess risk.

Clare Goodchild – published in The Crowd Magazine Issue 4, 2020