In this tongue-in-cheek keynote talk, Seretta Gamba will show how easy it is to disrupt or utterly ruin a test automation project. By describing seven proven methods to reach this end, she intends to give managers, testers and automation engineers the means to recognize early on whether their automation is in danger. And by ‘warning’ against possible defences, she’ll give you the tools to counter and solve such issues.
In this keynote talk, Seretta will give you insight into, firstly, how to recognise dangerous issues before they ruin your test automation and secondly, and most importantly, how to apply solutions that have already worked for practitioners in similar contexts.
From this keynote talk, you’ll:
- Become familiar with test automation issues before they do real harm.
- Understand a set of proven solutions to resolve such issues.
- Know how to access and use the test automation patterns wiki.
Testing is nowadays a crucial part of software development, but when it comes to good programming practices, many teams struggle to create sustainable and maintainable tests. Learning to use testing tools is often not enough; it is also necessary to create a framework that takes care of certain concerns.
In this talk, Sven Kroell will describe how a person responsible for automated checks can create a maintainable and scalable architecture. He will introduce different patterns which are commonly used in the industry and explain the implementation of the Page Object pattern, the Repository Provider pattern and how you can create a simple DSL which is used in your tests to keep them clean and readable.
His examples will be in Java and Espresso, although the structure is portable to various other solutions such as Selenium and Appium.
From this talk, you’ll gain insight into:
- How to set up a proper architecture for your test automation framework.
- How to move your current codebase to a more scalable design.
- Why you should do this and how you can sell it all to your product owner.
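To give a flavour of what the talk covers, here is a minimal, language-agnostic sketch of the Page Object pattern in Python (the talk’s own examples are in Java and Espresso; the page names, locators and the `FakeDriver` stub below are purely illustrative, not taken from the talk):

```python
# Page Object sketch: each screen gets a class that hides locators and
# interactions, so tests read like the DSL the talk describes.

class LoginPage:
    """Encapsulates the locators and interactions of the login screen."""
    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        # The test never touches locators: they live in one place only.
        self.driver.type("#user", user)
        self.driver.type("#password", password)
        self.driver.click("#submit")
        return HomePage(self.driver)

class HomePage:
    def __init__(self, driver):
        self.driver = driver

    def greeting(self):
        return self.driver.text("#greeting")

# A tiny fake driver so the sketch runs without a real UI or device.
class FakeDriver:
    def __init__(self):
        self.state = {}
    def type(self, locator, value):
        self.state[locator] = value
    def click(self, locator):
        if self.state.get("#user"):
            self.state["#greeting"] = f"Hello, {self.state['#user']}!"
    def text(self, locator):
        return self.state.get(locator, "")

# The test expresses intent, not mechanics:
home = LoginPage(FakeDriver()).login("ada", "s3cret")
print(home.greeting())  # → Hello, ada!
```

When a locator changes, only the page class changes; every test that uses it stays untouched, which is the maintainability argument the talk makes.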
Biases are systematic thinking errors that we tend to make. It is inherent to our jobs as testers that we collaborate with very diverse groups of people. Some of them may be biased against you - we all know the stereotype of the tester as the nosy intruder who comes to point out everyone's mistakes. But have you ever turned the mirror towards yourself and considered that YOU may be biased as well? Your testing processes may be affected by biases. And even technology itself can inherit its creators' biases.
After having worked in both a big multinational company and its opposite - a tiny start-up, Lina has realized that the most rewarding progress she has made as a tester was recognising that she has her own biases, and then learning how to manage them. As a result, she has been able to collaborate with her co-workers more effectively, setting aside her own predispositions and allowing herself to take a step back and look at things more objectively, thus ensuring the quality of her company’s products. She realized that all work is affected by biases - be it directly or indirectly. This led her to a lot of great books, conversations and discoveries about biases.
In this story-fuelled talk, Lina will share practical tips with you on how to recognize your biases, deal with all kinds of professionals and make sure that your testing and the technology you are testing is not biased.
Learn how to introduce Watson's cognitive and analytical capabilities into the DevOps application development lifecycle.
Most of the time, we need to review the changes that have been made in our test environment, which can be quite cumbersome. But wouldn’t it be good to have an assistant who tells us which tests should be launched to verify and validate the changes made for a given build?
Even better if we had a predictive analytics and dashboard service for our tests that recommended the best time to test! This is now possible in a more agile way than ever before, thanks to the cognitive and analytical capabilities of Watson.
In this talk, Javier Lisbona, IBM’s DevOps leader, will share with you his DevOps project "DevOps: A Smart Life-Cycle" and show you how to give added value to your DevOps life cycle with cognitive capabilities.
Negative tests are often tricky to automate, as they differ from positive/happy-path tests.
When negative tests are initially created, they are often quite productive, but over time, as the application or product under test changes, they may produce false positives or become altogether irrelevant!
Moreover, negative tests are often ignored as they are too improbable or their set-up and creation is too costly. However, the defects resulting from the lack of negative testing are costlier to fix, in terms of time, money, or reputation. For this reason, we should show more love and care to negative test automation.
In this talk, Tuhin Mitra will share his experiences, using real-life project examples that helped him come up with concrete negative assertions, assertions that were relevant to the expected behaviour of the application. He’ll share real and pseudo-code examples to demonstrate these assertions and will describe the lessons he learned over time, resulting in better negative assertions.
From this talk, you’ll understand negative test automation and how to convince your team to write negative tests regularly. You’ll also gain insight into the real-life challenges faced when automating negative tests and the possible solutions to overcome these challenges.
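As a taste of what a concrete negative assertion can look like, here is a hedged Python sketch (the `withdraw` function and its error messages are invented for illustration and are not from the talk):

```python
# A negative assertion should pin down *which* failure occurs, not merely
# that "something failed" - otherwise it rots into a false positive as the
# application changes.

def withdraw(balance, amount):
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

def assert_raises_with_message(func, expected_message):
    try:
        func()
    except ValueError as err:
        # Asserting on the message keeps the test relevant even if new
        # error paths are added later.
        assert str(err) == expected_message, f"wrong error: {err}"
    else:
        raise AssertionError("expected ValueError, got none")

# A weak negative test would accept *any* ValueError; these fail loudly
# if the overdraft check silently disappears or changes meaning.
assert_raises_with_message(lambda: withdraw(100, 500), "insufficient funds")
assert_raises_with_message(lambda: withdraw(100, -5), "amount must be positive")
print("negative assertions passed")
```

The same idea transfers to any assertion library (e.g. pytest’s `raises` with `match`): assert the specific failure, not the mere presence of one.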
Testing in the Hundred Microservices World: When the Testing Pyramid becomes an hourglass
As the industry moves from monoliths towards distributed systems, we have gone from testing monolithic applications, with their own challenges, to testing integrations of hundreds of microservices, which comes at a price: brittle, slow, and infrastructure-dependent tests.
How can you have robust, fast, reliable and well-scoped tests? How can you be the first one to know about problems in production? And how can you guarantee a rollout plan that affects the least number of customers possible if something goes wrong?
In this talk, Isabel Vilacides will discuss her journey to come up with a testing strategy able to protect customers from dozens of releases per day of different services at the same time. She will touch on key aspects such as integration testing, more precisely its difficulties and how contract testing is a solution for it, monitoring as the new testing, and safe rollout techniques such as feature flags, experimental update centres and dogfooding.
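As one illustration of the contract testing idea mentioned above, here is a minimal Python sketch (real tools such as Pact are far richer; the contract shape and field names here are invented):

```python
# Consumer-driven contract testing in miniature: the consumer publishes
# the fields it relies on, and the provider's pipeline verifies each
# response against that contract - no combined deployment needed.

contract = {  # What the consumer relies on in the provider's response
    "id": int,
    "name": str,
    "active": bool,
}

def verify_contract(response, contract):
    """Return a list of ways the response breaks the consumer's contract."""
    problems = []
    for field, expected_type in contract.items():
        if field not in response:
            problems.append(f"missing field: {field}")
        elif not isinstance(response[field], expected_type):
            problems.append(f"{field}: expected {expected_type.__name__}")
    return problems

# Extra fields are fine (consumers ignore them); missing or retyped
# fields are breaking changes caught before release.
good = {"id": 7, "name": "expo", "active": True, "extra": "ignored"}
bad = {"id": "7", "name": "expo"}
print(verify_contract(good, contract))  # → []
print(verify_contract(bad, contract))
```

Because each side runs the check independently in its own pipeline, contract tests replace much slower end-to-end integration environments for this class of failure.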
DEISER DevOps: Going live 10 times faster the Atlassian way
In lean organizations where software operations are not formalized in dedicated teams, it's very common for product developers to encounter heavy release barriers that postpone code from being deployed for days, sometimes even weeks. A proper DevOps discipline becomes particularly important when it comes to test automation and QA procedures.
In this talk, DEISER's Product Manager and QA Manager will present a unique use case of the Atlassian stack in a customized development process that has allowed them to launch releases in seconds, when it used to take days.
From the talk, you will learn how to save hundreds of hours and work smartly with a fine-tuned Atlassian stack. Key takeaways include:
- Defining and automating an issue type "release".
- Creating Jira workflows to control Bamboo plans, builds and deployments.
- Establishing release transparency across the company.
Master Class: Automating embedded software tests using Robot Framework
Embedded software is usually identified with products such as IoT devices; however, it goes further than that: in industry, its use is far more extensive and usually more critical. Some requirements are especially hard to test in this type of system due to their complexity, and the custom-made scripts often used to test these systems only scratch the surface of what automation can achieve.
David Barba and Alejandro Izquierdo have found an alternative to traditional scripting for automating tests in embedded systems: Robot Framework. It’s a generic test automation framework for acceptance testing and acceptance test-driven development (ATDD). It has easy-to-use tabular test data syntax and uses a keyword-driven testing approach. Its testing capabilities can be extended by test libraries implemented either with Python or Java, and users can create new higher-level keywords from existing ones using the same syntax that is used for creating test cases.
In this master class, David and Alejandro will explain the fundamentals of Robot Framework for creating automated test cases and how keywords can be used as libraries (directly in Python) and as resources (recombining other keywords of the framework). They will also present the libraries they developed in Python that allowed them to adapt the framework to their own needs, and how they were able to use tools such as HiL (hardware-in-the-loop simulation, which provides a platform by adding complexity, through mathematical representation, under control of the test platform) as well as other hardware, like oscilloscopes, for their test automation.
In the final part of the master class, David and Alejandro will show how to generate and interpret reports, how to integrate Robot Framework into an open-source automation server like Jenkins, and how it can contribute to a continuous delivery hub for any project.
From this master class, you’ll gain insight into good practices for automating tests, including creating set-ups and teardowns for test cases and test suites. You’ll also gain an understanding of how to create your own keywords and libraries in Python to maximize the possibilities of adapting the framework to your own needs.
As the software tools and libraries that David and Alejandro will use are open source, make sure to bring your laptop for this master class to interact and follow the examples that will be presented.
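To illustrate the kind of Python keyword library the master class covers, here is a minimal sketch (Robot Framework exposes the public methods of a library class as keywords; the relay-board scenario below is invented, not taken from the session):

```python
# A minimal Robot Framework keyword library in Python. Method names map
# to keywords: switch_relay_on becomes `Switch Relay On` in a .robot file.

class RelayBoardLibrary:
    """Stand-in for an embedded I/O library; a real one would talk to
    hardware (serial port, HiL rig) instead of a dict."""

    def __init__(self):
        self._relays = {}

    def switch_relay_on(self, relay):
        # In a real library this would write to the hardware interface.
        self._relays[str(relay)] = True

    def switch_relay_off(self, relay):
        self._relays[str(relay)] = False

    def relay_should_be_on(self, relay):
        # Verification keywords raise AssertionError to fail the test.
        if not self._relays.get(str(relay), False):
            raise AssertionError(f"Relay {relay} is not on")

# Outside Robot Framework the library is plain Python, so it can be
# exercised directly:
lib = RelayBoardLibrary()
lib.switch_relay_on(3)
lib.relay_should_be_on(3)
print("keyword library works")
```

In a test suite this would be imported with `Library    RelayBoardLibrary.py`, after which a test case can simply say `Switch Relay On    3` followed by `Relay Should Be On    3`.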
In the past, there was a traditional way to manage test data: subset creation from the production environment and, in some specific contexts, data masking for sensitive data. Despite being a time-consuming task, this approach has been used in many enterprises. The scenario was not catastrophic, since each software release took several weeks (or even months) to go live… although it was far from a perfect one.
Within DevOps contexts, things became worse. Each software component is usually created in just a few hours and deployed using a Continuous Delivery pipeline. Inside the pipeline, each component needs to be tested within a few minutes.
For this reason, the traditional approach to managing test data is no longer feasible in such contexts. Even assuming a quick way to refresh production data in test environments, it’s not enough, since production data doesn’t contain the future scenarios needed for effective testing.
In this talk, Filipe Nuno will discuss new trends in test data management (TDM) to overcome typical test automation bottlenecks, namely addressing synthetic data generation (to cover negative and unusual test cases). He will explain what test data warehousing is (production data and generated data for gold copies) and how to reuse data stored in a test data warehouse. He’ll also explain what a self-service portal with user-defined criteria is, and much more.
From this talk, you will understand how to use a return on investment (ROI) approach to managing test data, based on reducing the time and resources needed for data provisioning and on manageable subsets of data (reducing your test database size and CPU consumption in test environments). Consequently, you can reduce infrastructure costs and improve overall software quality.
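As a small illustration of the synthetic data generation discussed above, here is a hedged Python sketch (the customer schema and field names are invented for the example):

```python
# Rule-based synthetic data generation for TDM: instead of subsetting
# production, generate rows that cover future and negative scenarios by
# construction, fast enough for a Continuous Delivery pipeline.

import random
from datetime import date, timedelta

def generate_customers(n, seed=42):
    random.seed(seed)  # deterministic: the same data on every pipeline run
    rows = []
    for i in range(n):
        rows.append({
            "customer_id": 1000 + i,
            # Include future dates - a scenario production data never holds.
            "contract_end": date.today() + timedelta(days=random.randint(-30, 365)),
            # Negative balances cover the unusual cases production lacks.
            "balance": round(random.uniform(-500.0, 10000.0), 2),
        })
    return rows

rows = generate_customers(5)
print(len(rows), rows[0]["customer_id"])  # → 5 1000
```

Because generation is seeded and cheap, each pipeline run can provision a small, tailored dataset in seconds rather than refreshing a masked production subset.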
Are you a tester? Then you know the famous "it works on my machine". But if you are a mobile tester, then you’ve probably also heard "it works on the emulator"! However, even when it works on your test lab devices or on your phone, does it work in the wild?
Field testing is critical for the validation of any mobile application when it comes to connectivity, location or even devices and platforms. So how do you do it? The most common solution is crowd-testing... But what if it passes the crowd tests yet fails for your end users? Then the fun begins. You will need to go out and do the field testing yourself!
In this talk, Joel Oliveira will share the lessons he learned from a real field testing experience in a different country and with geographically distributed teams!
Artificial Intelligence is now starting to be present, in one form or another, in the offerings of today’s largest software companies. With this, many platforms that aim to support quality assurance processes with AI techniques are also appearing. However, at present, the actual results of such techniques and platforms are still quite limited and need to go much further. That said, even though the results are, at present, short-term and limited, if you don’t start with AI, you’ll soon be left behind!
From a functional coverage point of view, it’s not about the actual test process but the vision of quality provided when managing testing processes. That is:
- Project quality prediction.
- Test case optimization.
- Test coverage definition.
- Natural language for test definition.
From a technical perspective, few companies have sufficient, properly modelled historical data for learning processes to generate precise and effective results.
So, is AI for testing a worthy investment?
In this talk, Pablo Sánchez will explain why the answer is definitely YES and how you can start enjoying the magic of AI for testing.
★ Keynote: Let’s stop talking about Testing, let’s start thinking about value
We hear a lot of talk about quality, testing and preventing bad software from getting to customers. What do people need to solve their problems? We need to move away from the idea that finding bugs solves all our quality problems. We cannot test quality in, so we need to do better and a lot more.
From our first days in IT, we have always heard discussions about whether we need testers or not. Testing is always slow, testers are cynical people who deliver bad news, and most of all they do their utmost to slow down delivery. But maybe we need to stop this discussion and take it to another level. So, are we now saying we do not need testing anymore?
Not at all! But do we need to talk about it? Many people just do not understand testing and what testers do. As testers, we need to up our game and talk about it. Maybe we need to find another approach to deliver the value our clients need. How do we do this?
It is not about testing methods or approaches; these do not matter if our mind-set is not focussed on delivering value. The key to this is the way we think and the way we communicate about what value is. It is about bringing people together: being able to communicate effectively, building bridges around overall quality.
In this keynote talk, Alex and Huib will share their view on the role of testing and quality in modern software development.
When stepping into a role at his current company, Adam Knight was presented with the challenge of a lack of consistent understanding across the business of acceptable risk levels for development. In response, he adopted a technique from financial services to raise the subject of risk within his new company and drive towards a consistent understanding of what "tested" means to a business.
In this talk, Adam Knight will examine the nature of individual risk perception being linked to our evolutionary biases and how concepts such as 'risk compensation' and the 'availability heuristic' steer our risk perception. He'll investigate how these might explain why testers and business leaders will inevitably differ in their risk assessment of any situation.
Adam will also explain how he addressed this difference and adopted an innovative technique to measure the risk appetite of individuals: a risk questionnaire. He’ll then show you how he adapted the approach for software development and the insights it provided into the perception of risk in his company.
After this talk, you’ll understand why people’s risk perception skills are not designed for the modern workplace, with an awareness of why testers and business leaders inevitably differ in their perceived risk. You’ll also gain a look at using a risk-profiling questionnaire to help understand those differences.
POCs - a different approach in building testing strategies for test automation
Initiatives around new test automation approaches within your company are subject to many unknowns, and most initiatives suffer fatal setbacks. One solution in the test automation world is the proof of concept (POC): a piece of software, developed by us testers, which allows us to collect the maximum amount of learning in a specific context with the purpose of making a point to our stakeholders (team, managers, clients).
Building and presenting proofs of concept is not only about opportunities and creative genius: taking this approach in the area of software testing contributes decisively to the success of test automation journeys. When talking about test automation strategies, the lean start-up philosophy enabled a shift in the way Andrei Contan perceived test automation and helped him build significantly more trust with his stakeholders. He discovered that stages such as holding brainstorming sessions, creating a minimum viable product and minimising the total time through the build-measure-learn feedback loop contributed to the success of his test automation strategy efforts.
In this talk, Andrei wants to share with you the key learnings of his journey in applying the lean start-up mind-set when building testing strategies and POCs. He’ll show you the steps to identify business needs that can then be translated into measurable objectives using testing strategies. You’ll discover an approach to building sustainable testing strategies around the goals you envision, as well as ideas for implementing POCs to support your testing strategy and give people a better understanding of the benefits of test automation.