Conference: 18th June 2019

 
Michael Bolton
 

★ Keynote: The Secret Life of Automation


Michael Bolton


Track 1 | 09:45-10:35


The Web is abuzz with talk about “automated testing” and “test automation”. Automation comes with a tasty and digestible story: eliminate “manual testing”, and replace messy, complex humanity with reliable, fast, efficient robots! Yet there are many secrets hidden between the lines of the story.


Automation encourages people to think of mechanizable assembly-line work done on the factory floor, but neither development nor the testing within it is like that. Testing is a part of the creative and critical work that happens in design studios, inventors’ workshops, and research labs. Although they can be assisted by tools, those kinds of work are neither “manual” nor “automated”.


User and tester actions can be simulated, but users and testers cannot be replicated in software. Automated checking does exist, but it cannot do the testing. While tools can help us, we must not lose sight of the important skilled work that people must do to use tools wisely and powerfully.


In this talk, Michael Bolton will reveal secrets about automation that people do not usually consider, disclose or discuss. He’ll present a vision for using tools effectively—one that puts the tester at the centre of testing work and the testing mission: finding problems that threaten the value of our products and our projects.


 
 
Andrew Brown
 

Artificial Intelligence - a human approach


Andrew Brown | AI Testing | General Level


Track 1 | 11:05-11:50


Introduction


Many people working in AI have a limited and shallow understanding of human intelligence. Some are dismissive of solutions that the human brain has developed. Others are intimidated by the challenge of unravelling the mysteries of the brain, so they treat the problem as a black box. However, both these approaches are an abrogation of responsibilities and lead to a failure to develop genuine artificial intelligence. Before we can hope to construct artificial intelligence, we must first understand what intelligence is, and we can only do this through understanding human intelligence.


Objectives


In this talk, Andrew Brown will use human memory to demonstrate why any attempt to develop AI must include an understanding of human intelligence. He will examine several different types of memory failure and demonstrate two things: first, these apparent failings are better understood as effective adaptations to the data environment our ancestors lived in; and second, they point to important problems that may not have been fully considered by many people attempting to build AI systems.


Outcomes


After having attended this talk, you’ll see how your brain forgets and distorts your memories without you ever being aware of it, why having a perfect memory would make you considerably less intelligent, and why understanding your own memory failures is so important to developing and testing AI systems.


 
 
Andy Glover
 

From Laptops to Lambdas – a test automation journey to the cloud


Andy Glover | Test Automation & Tools | General Level


Track 2 | 11:05-11:50


Introduction


In software development, quick and reliable feedback is like gold dust. Traditionally, testing is often described as the bottleneck of development: the flow of information about software quality can be slow, and with that comes pressure to ‘speed up’ testing. Test automation is seen as the silver bullet, and managers and clients alike demand that testing be automated. Yet the reality is not as easy as it sounds.


Objectives


In this talk, Andy Glover will share his experience leading testers on the journey from long manual regression cycles to running automated checks in the cloud. Andy will discuss common pitfalls that affect a team’s chances of success, such as the steep learning curve of some automation tools, the lack of time to automate checks within a sprint, long and brittle automated checks, running automated checks only locally, and assuming automated checks are the same as testing.


To address these common pitfalls, Andy will highlight some of the solutions used:

  • Use record and playback tools as a first step into check automation.
  • Keep each check as small as possible: the smaller the check, the better.
  • Constantly research new tools and techniques to make testing quicker or easier.
  • Promote and champion the need to automate as you test.
  • Keep looking for opportunities to push to the cloud.

Andy will also present a demo of how his team set up automated checks using Lambdas on AWS, which allowed ever-increasing numbers of checks to run concurrently without having to manage a server – it’s all done in the cloud, a real game changer!
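That Lambda fan-out pattern can be sketched locally with a thread pool, where each worker stands in for one function invocation; `run_check` and the check names below are hypothetical placeholders, not the team's actual code:

```python
from concurrent.futures import ThreadPoolExecutor

def run_check(check_name):
    # Placeholder for one automated check; in the real setup this would
    # be a single AWS Lambda invocation driving the system under test.
    return (check_name, "passed")

checks = [f"check_{i}" for i in range(20)]

# Fan out: all checks run concurrently, mirroring one Lambda per check,
# with no server to provision or manage.
with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(run_check, checks))

passed = sum(1 for _, status in results if status == "passed")
print(f"{passed}/{len(results)} checks passed")
```

The appeal of the Lambda version is that the pool size is effectively unbounded: the cloud provider scales out the workers instead of the test runner.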


 
 
Albert Tort
 

Smart analytics, artificial intelligence and cognitive models for testing!


Albert Tort | QA + Artificial Intelligence | General Level


Track 3 | 11:05-11:50


Introduction


Artificial intelligence comprises a wide range of techniques (machine learning, natural language processing, smart analytics, chatbots, etc.) that are currently used in various scientific and technical fields to assist manual activities and increase cognitive computing capabilities. In the field of software testing and quality, this is a new reality that opens the door to new opportunities to optimise testing activities, which must become more efficient within software development processes. Testing more efficiently in shorter iterations demands even more intelligence and skill, grounded in the context and in the capability to use the available information.


Prioritising the test cases to execute, predicting the investment needed in testing, anticipating high-risk functional areas, optimising the tests to run, detecting traceability, designing effective cases and detecting potential duplications, considering as much information as possible in decision-making, and automating tests with attention to maintenance: these are all challenges in the field of testing and quality, where intelligence (both artificial and human) is key to overcoming them. Part of this intelligence can be enhanced with artificial intelligence techniques, smart analytics and cognitive models to increase our efficiency, anticipation and prioritisation in testing.


Objectives


Albert will explain the key elements of a platform for assisting testing and quality with artificial intelligence, automating decisions and activities through artificial intelligence techniques, smart analytics and cognitive models.


For the competition, Albert will determine who gets the best results: you as a participant, or his testing-assistance QA bot as your competitor!


He’ll show you the importance of applying AI techniques in aspects such as the selection and automatic prioritisation of tests, the prediction of high-risk functional areas, the detection of potentially duplicated tests, and the allocation of tasks and resolution of faults.


Outcomes


After having attended this talk, you’ll gain insight into:


  • How to avoid losing time testing unnecessary cases
  • A set of artificial intelligence techniques that can be used to assist and improve the efficiency of software testing and quality
  • Cases in which cognitive models, predictive dashboards and automated actions can be applied in the field of testing
  • The need for a platform such as CognitiveQA to support the implementation and use of artificial intelligence in quality assurance
  • Innovations in testing by using machine learning and cognitive models for the automation of tests and other QA activities

 
 
Micael Gallego and Patxi Gortázar
 

Testing cloud and Kubernetes applications


Micael Gallego & Patxi Gortázar | Testing & Tools | Advanced/Expert Level


Track 5 | 12:00-12:45


Introduction


Integration and end-to-end (e2e) testing of distributed systems, especially those deployed on cloud infrastructures or container orchestrators (like Kubernetes), is a much more complex task than testing a monolithic application. Distributed systems require several services to be started, even for the simplest integration tests, and several tools need to be in place, like automated browsers for e2e testing.


Objectives


In this master class with Micael Gallego and Patxi Gortázar, you’ll learn how the effort required for 1) testing such systems and 2) doing root cause analysis in the presence of failures can be reduced by using monitoring tools like Elasticsearch.


The master class will start by explaining how to perform e2e testing over a Docker-based application using Jenkins, a popular CI server. Micael and Patxi will go through the life-cycle of the application: starting, testing, gathering logs and metrics, stopping and analysing results.


Then, they will turn to general monitoring tools like Elasticsearch and/or Prometheus, configuring and using them in the context of e2e testing. Finally, more specific tools like ElasTest will be used, showing how root cause analysis can be greatly simplified and made more effective.


Outcomes


After having participated in this master class, you’ll learn how to approach e2e testing of complex distributed applications deployed on cloud providers or container orchestrators, as well as which tools are best suited for this task, from general platforms to very specific ones.


 
 
Enrique Almohalla
 

Data masking project journal: GDPR-compliant data for non-production environments


Enrique Almohalla | Test Automation and Tools | General Level


Track 4 | 11:05-11:50


Introduction


Test data unavailability is one of the most prominent causes of software delivery delays. Testers spend 50% of their time waiting for data, and 86% of QA specialists say that creating and managing test data is extremely hard. At the same time, 90% of companies consider that GDPR greatly impacts their test data management processes. Not in vain: the production environment is the main source of test data.

It is not impossible to use real data to test software and remain GDPR compliant, but it is so complicated and expensive that you are better off not doing it. To use real data, you need the owner’s explicit permission. Once you have it, you have to treat the data as the sensitive data it is: you need adequate security measures, updated software and access records, and if the data subjects exercise their rights of access, rectification, erasure or objection, those rights must be applied to every database where the data has been copied. It is far simpler and safer to anonymise (mask) the sensitive data.
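To illustrate what masking means in practice, here is a minimal sketch of deterministic pseudonymisation, assuming a per-project secret salt; the field names are made up, and this is only a toy illustration of the idea, not the approach of any particular platform:

```python
import hashlib

# Hypothetical secret salt; in practice it is kept out of source control.
SALT = b"per-project-secret"

def mask(value: str) -> str:
    # Deterministic, non-reversible token: the same input always yields
    # the same output, so referential integrity across tables survives.
    digest = hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()
    return "m_" + digest[:12]

row = {"name": "Jane Doe", "email": "jane@example.com", "plan": "pro"}
SENSITIVE = {"name", "email"}

# Mask only the sensitive fields; leave the rest untouched.
masked = {k: mask(v) if k in SENSITIVE else v for k, v in row.items()}
```

Because the mapping is deterministic, a foreign key masked in one table still joins against the same value masked in another; purely random tokens would break that.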


Objectives


In this talk, Enrique Almohalla will explain what you need to consider in order to deploy a data masking process: challenges and decisions to make, what benefits to expect, other than regulatory compliance, and what risks to face. The data masking process will be presented as a first step of a rational and pragmatic path to testing efficiency improvement through test data management.

The presentation will go through topics such as the sensitive information inventory, sensitive data identification, data masking policy design and subsetting strategy.

The session will draw on the empirical experience of deploying the data masking platform Icaria Mirage, and will derive more general conclusions applicable to other technical environments.


Outcomes


After having attended this talk, you’ll have the key to GDPR compliance regarding the use of real data in non-production environments. You will understand how to face an identification and sensitive data masking project and overcome the challenges it brings. Finally, you will understand how to transform a legal requirement into an opportunity to design and deploy an efficient test data management strategy.


 
 
Joost van Wollingen & Ivo de Bruijn
 

DevOps: an unknown future for testers? Or an opportunity?


Joost van Wollingen & Ivo de Bruijn | DevOps | General Level


Track 1 | 12:00-12:45


Introduction


At bol.com, the largest online retailer in the Netherlands, they have spent lots of effort on creating faster feedback cycles. With the ambitious goal to release multiple times per day in mind, they've upended their organisational structure, to give teams maximum autonomy in their road to production.


Objectives


At bol.com, Joost van Wollingen and Ivo de Bruijn built shared, internally open-sourced frameworks to reduce the time needed to bootstrap new services. All this was necessary to scale up to 60 teams, each of which has operational responsibility for its own applications. They've created pipelines to spin up environments for fast, isolated, autonomous testing. With DevOps seemingly down to a T, what happens to the role of the tester? That is the question Joost and Ivo would like to explore during their interactive talk, based on their experiences at bol.com and the experiences of the audience.


Outcomes


After having attended this talk, attendees will learn about the success factors for DevOps at bol.com, but more importantly they will discuss how these learnings translate to the context of other companies. These new ways of building and delivering software have a profound impact on the role of the tester. Should we all be looking for a new job? Go back to school to learn programming and computer science? There seems to be a gap between the skills that are asked for and those on offer. But what is needed for DevOps success?


 
 
Antonia Landi
 

QA in an A/B-Test driven company - Why companies of tomorrow need QA Superstars, and how to become one!


Antonia Landi | Processes and Methodology | General Level


Track 2 | 12:00-12:45


Introduction


Some of the biggest companies today use A/B tests to increase conversion, retention and a myriad of other vital business metrics. A/B tests have already been adopted by countless organisations and are rapidly becoming the cornerstone of any business-driven venture.


So how and where does a tester fit into that? How can you maintain a high level of quality if there are several versions of your product, all of which interact with one another and change the user experience at crucial points? How should you structure your QA department within an organisation that is focussed on constant delivery and iteration of A/B tests?


Objectives


Drawing from her professional experiences, Antonia Landi will share practices, insights, and difficulties in building and adapting a QA department within a company that strives to test every hypothesis. Antonia will talk about why a change of perception of what makes a great QA department is needed, as well as how breaking brand new ground enabled her to take part in innovating not only her own role, but also the role of QA within any A/B test driven company.


Outcomes


After having attended this talk with Antonia, you’ll learn why a traditional approach to QA is not suited to an A/B-test driven company, what the advantages and disadvantages of having to be extremely flexible are and why the role of QA must grow beyond that of someone who finds and reports technical faults.


 
 
Aurelio Gandarillas
 

The digital transformation of the tester through AI


Aurelio Gandarillas | Artificial Intelligence | General Level


Track 3 | 12:00-12:45


Introduction


Artificial intelligence is transforming the use of technology in all areas and logically, testing is being transformed by its use.


Objectives


In this talk, Aurelio Gandarillas will explain the key elements of a software development process with an integrated testing and software quality process supported by AI, such as:


  • Transformation of processes based on detection to processes aimed at preventing defects
  • Software product related decisions, made by AI based on Big Data generated from automation (DevOps)
  • Optimisation of the software development process through the proposal of improvements suggested by AI
  • Design and generation of automated test cases performed by AI
  • The digital transformation of the tester: from QC to QA, and from QA to optimiser of the development process

Outcomes


After having attended this talk, you'll gain a vision of a development process, and of testing in particular, where AI is incorporated right from the beginning.


Target Audience


Software development, testing and quality managers as well as testing professionals.


 
 
Alex Soto
 

★ Keynote: Testing is over. Find a new job!


Alex Soto


Track 1 | 14:15-15:05


Testers used to be the keepers of quality: without us, developers would just keep pushing bugs to production. Yet the world has changed. Testing as we knew it is a thing of the past.


Testing is over: developers are doing their own tests, developers are doing their own ops, testers are doing their own ops, and ops are doing their own testing. All of this to be able to deploy code (or bugs) multiple times per day.


Automation, however, is just the tip of the iceberg. In the new testing era, you cannot depend on manual testing anymore to verify the correctness of your software; you need new techniques, ones that more likely fall on the operations side of things, to be able to release multiple times per day without breaking production.


Desperate? Angry? Sad? Then join Alex Soto in his keynote talk and understand why and how you’ll find a new job. The answer might surprise you (or not).


 
 
Stefania Ioana Chiorean
 

Testing the new frontier of mobile web


Stefania Ioana Chiorean | Mobile Testing | General Level


Track 1 | 15:15-16:00


Introduction


When smartphones appeared, a new set of applications arose together with new challenges. The debate on the design side was how to make them more mobile-friendly, with looks fitting small screens while keeping productivity on the move. With so many devices in people’s hands, we realised there are problems beyond our systems, such as missing connectivity, lacking infrastructure, low specs and many others. A couple of years ago, the industry shifted to accommodate this, and ‘offline first’ became the trend. A new way of testing and a new set of priorities landed on testers’ agendas.


Objectives


In this talk, Ioana Chiorean will cover the technical specifications of these three waves of apps and their properties, drawing up a new testing process for each. Ioana will address both mobile and offline changes as separate pillars of PWA testing. She’ll introduce accessibility testing, mobile web testing, performance testing, scalability testing and other types of both functional and non-functional testing.


Outcomes


After having attended this talk, you’ll walk away having learnt more about the mobile ecosystem – both its recent history and the first signs of its future. The types of testing previously mentioned will be explained and exemplified, and best practices and lessons learned will be shared.


 
 
Erik Stensland
 

Robots solving complex tethered Android device automated testing


Erik Stensland | Test Automation & Tools | Advanced Level


Track 2 | 15:15-16:00


Introduction


The challenge we are solving with both software automation tools and robots is unique: we are testing Android devices that require true human interaction. A human swiping, dipping or tapping a credit card cannot be simulated with software automation tools.


We are advancing automated testing by using both software and robotic hardware to solve challenging automated testing scenarios.


Objectives


In this talk, Erik Stensland will cover areas such as framework architecture, robotic design and the integration of the two to create a symbiotic relationship. There will also be a live demo of the framework and robot working together, executing a series of tests.


Erik’s approach started with small robotic interfaces that could perform simple tasks and over time grew into full-scale, human-simulated interfaces to the devices. By combining UIAutomator with robotic software and hardware, Erik successfully created an automation framework that performs complex automated testing on tethered Android devices with high reliability.


In this talk, you’ll see real-world examples of robots performing automated testing, a straw-man process of how we crawled, walked and ran while building out more and more complex testing capabilities, and how complex testing solutions can be automated by thinking outside the box.


Outcomes


After having attended this talk, your imagination will be opened to what can be done with inexpensive robotic hardware and software to create automated tests where it might have been impossible using software automation tools alone.


 
 
Carlos Machado
 

The importance of assessing the impact and risk of change within the software development process


Carlos Machado | Software Analysis Tools | General Level


Track 3 | 15:15-16:00


Introduction


Years of development using bad practices ultimately transform systems into real black boxes. In this scenario, the challenge becomes guaranteeing that all the teams involved in development have, when making changes, the necessary and precise information to plan, execute, control and carry out tests. So, how do we overcome this challenge?


Objectives


In this talk, Carlos Machado will show you what information you should obtain about your software systems when evolving and managing development within a dynamic process. Making such information available to the right people in your organization is of fundamental importance, and is key to reducing risk, minimizing costs and thus guaranteeing the agility and success of your projects.


Outcomes


After having attended this talk, you will gain insight into the information that should be collected from a change perspective, how to obtain it, and how to use it to precisely identify and analyse the impacted elements.


The correctness and accuracy of this information is vital, and key to correctly assessing possible scenarios in all their aspects, moving towards high-quality software and, in turn, making more informed decisions.


Carlos will also show you solutions and advanced software analysis tools that help you to gain a competitive edge in the market.


Target Audience


IT Leaders (Product Owners, CIOs, Solution Architects, Project Managers)


 
 
Luis Fernando Estévez
 

API Testing with Postman / Newman


Luis Fernando Estévez | API testing | General Level


Track 4 | 15:15-16:00


Introduction


According to the World Quality Report 2018-19, the use of microservices and APIs continues to grow. Large applications are divided into smaller, more manageable pieces, simplifying development, testing and deployment, and allowing improvements to be delivered more frequently, with lower risk, greater flexibility and better time-to-market.


This, in turn, requires tests to run more and more at the API level rather than at the user interface level. The growth of API test automation also continues, and we will probably see an increase in this activity in the coming years. One of the most widespread tools among developers and testers is Postman, which claims to be used every month by 6 million developers and more than 200,000 companies to access more than 130 million APIs.


Objectives


In this talk, Luis Fernando will discuss the management, design and execution of API tests using Postman and Newman. He will begin by introducing the concept of an API and its different types, as well as its importance in context. He will also review current tools that the market offers, comparing some of them, such as HP UFT or SoapUI.


Then, Luis will show you best practices for designing and executing API tests using Postman, starting from the creation of call collections, global variables, environment variables, data for iterations, etc., and paying special attention to how to chain calls to various APIs and how to effectively verify that the response matches the expected result, as well as discussing the limitations of the tool.
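The chain-and-verify pattern described above can be sketched in plain Python (Postman itself expresses the checks as JavaScript `pm.test` blocks); the two endpoint functions are hypothetical stand-ins for real HTTP calls:

```python
def create_user(name):
    # Stand-in for e.g. POST /users returning a JSON body.
    return {"id": 42, "name": name}

def get_orders(user_id):
    # Stand-in for e.g. GET /users/{id}/orders.
    return {"user_id": user_id, "orders": []}

# Chain the calls: the id from the first response feeds the second,
# much as a Postman collection passes values via environment variables.
user = create_user("ada")
orders = get_orders(user["id"])

# Verify the response matches the expected result (the role of pm.test).
assert orders["user_id"] == user["id"]
assert isinstance(orders["orders"], list)
```

In Postman, the extracted `id` would be saved with an environment variable in the first request's test script and referenced in the second request's URL.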


Finally, Luis will talk about the unattended execution of API tests designed in Postman, using Newman from the Windows command line. You’ll see how this facilitates the incorporation of tests into a continuous integration pipeline, such as Jenkins, and how to store and automatically analyse the results.


Outcomes


After attending this talk, you’ll be able to:


  • Differentiate between the different types of APIs and the main tools available on the market for testing them.
  • Design collections of API calls in Postman, including environment variables, global variables, etc.
  • Link API calls in Postman and include tests within them that effectively verify the received responses.
  • Run tests designed in Postman with Newman and include them in a continuous integration pipeline.

Target Audience


This talk is aimed at software QA professionals with an interest in API testing.


 
 
Javier Ruano Rodriguez
 

Communicating Test Results: Testers in the tower of Babel


Javier Ruano Rodriguez | The Human Touch | General Level


Track 1 | 16:30-17:15


Introduction


Software projects bring together cross-disciplinary professionals with different backgrounds, knowledge and individual objectives, creating a tower of Babel where testers face many communication challenges. On a daily basis, testers exchange information with developers, act as the bridge between the development team and the business, and interact with external teams like operations and customer support.


As in the story of the tower of Babel, a software project can fail because people do not effectively communicate with each other. Miscommunication can result in either a lack of information or even misleading information. Both situations can easily lead to wrong decision-making, negatively affecting the project.


Objectives


In this talk, Javier Ruano will share his own story about bridging the communication gap between various stakeholders and succeeding to speak the same language with them. Javier will present and analyse key communication impediments that can jeopardize any project, and show effective ways to overcome these barriers.


Javier will also explain the process he followed to develop an effective reporting plan, including the milestones involved in this journey and the obstacles encountered, as well as the alternatives employed to meet his objectives, such as a stakeholders’ needs assessment.


In addition, Javier will elaborate on the importance of understanding the context of our stakeholders before providing them with information. This means understanding their needs, interests and objectives in order to align reports and address concerns and expectations.


In summary, this is a talk about the what, why and how of successful communication in our software testing world.


Outcomes


After having attended this talk, you will understand the importance and implications of communication, and be able to reflect on other professionals’ context and perspective to achieve effective communication. You will grasp concrete strategies to assess stakeholders’ needs and develop customised communication approaches.


 
 
Maik Nogens
 

Testing Virtual Reality - The Trinity of Testing


Maik Nogens | Cool Stuff | General Level


Track 2 | 16:30-17:15


Introduction


While software and hardware as testing areas are somewhat “known”, in VR the immersion of the human in the middle is “new” and needs different testing approaches. How the human reacts physically and psychologically brings new challenges. From motion sickness through full immersion to the dulling of the tester, these are new problems where testers also become part of the product (as do users).


Objectives


In this talk, Maik Nogens will give an introduction to the field of xR (Cross Reality), with a focus on VR (Virtual Reality) and what will change when it comes to testing such applications. Maik will also give a broad overview of the hardware and software aspects of VR, but the main part will be about the human experience of body and soul in relation to an immersive VR experience.


Outcomes


After having attended this talk with Maik, you’ll gain insight into how testing VR applications needs new ideas and a review of existing testing knowledge. Some techniques might be more useful for VR, and some can still be used as they are, but we also need a whole new approach to testing the whole xR spectrum. This talk focuses on VR and what is “new” for testers.


 
 
Organised By
nexo QA