Entry Guide
The following are comments gathered from previous Judging Panels after they had reviewed all entries. They may help you decide what information to include in your entries.
As a whole, the entries did not address the criteria as fully as the Judging Panel would have liked. Additionally, entrants should remember that they are being judged by a panel of industry peers and so would do well to pitch the entry to that target audience. A few entries were considered “overly simplistic.”
Key tips from judges on the 31 Media Podcast
Strong Entries:
- Emphasised the business importance and criticality of the project, and clearly identified what was innovative about the approach being adopted.
- Focused on security testing and gave detail on tool evaluation, plus the approach to overcoming challenges. Good integration into delivery and security testing principles.
- Clear project goals coupled with quantified outcomes/successes; adaptability to mitigate unexpected challenges/risks and the utilisation of a wide variety of testing approaches and techniques that aligned with the complexity of their environments.
- Very explicit in drawing out the use of agile principles and how they were deployed and adapted throughout the project. It is also valuable to have concrete examples of processes that worked well, and of lessons learned from mistakes.
- Showed the greatest number of traits/characteristics one would expect from someone with strong people management and communication skills: mentorship (not only for people within their own company), coaching, being a role model, being approachable and supportive, being valued, respected, and listened to by people, dedication to team well-being, and a focus on skilling people up.
- Evidence of overcoming challenges was clear; they also described their commitment to best practices and their selection of a methodology to support their automation objectives, along with evidence that made their application stand out from the rest.
- Clearly researched and implemented the best technology to test ALL the systems, NOT just the software-based backend systems.
- Very clear evidence of the project methodology and justification of technology choices. Very clearly detailed description of understanding the stakeholders' needs, the importance of the project, and the overall goals. In addition, they employed a combination of technologies to solve some very difficult problems, and demonstrated some very innovative use of technology to deliver a complete testing solution.
- Explained well the approach to selecting tools, and the reasons for selecting them. The tools selected represent best-practice test approaches, and the results speak for themselves. In addition, the approach to synthetic data creation was good. The testing has clearly been challenging, and some of the unique solutions implemented, along with the clear narrative on the reasons for each choice, are very exciting.
- Gave context to metrics to fully justify their inclusion.
Weak Entries:
- Did not describe challenges very well, talking about business, project, or architectural challenges rather than challenges in their automation journey. There was insufficient detail around how they were testing to allow a view to be taken on the quality of the test scripts. Projects were unable to justify their choices in selecting an implementation approach.
- Did not cover all the criteria defined, which made for a slightly weaker application.
- Lacked detail around the approach to testing. A diagram summarised the tools used, but there was limited detail on what challenges were encountered and how those were overcome through the use of automation.
- Did not demonstrate evidence of delivering on time, within budget, or of engagement with stakeholders, nor did they reflect back on goals to establish ultimate project success.
- Concentrated on the merits of the tool or method rather than the actual project deliverable, which is less likely to be scored highly.
- Did not justify the reasoning behind including metrics.