In this episode, James Churchill talks with Craig Dennis, Nathan Menge, and Doug Darst about QA testing automation.
Mentioned References
- Software Testing - Wikipedia
- Unit Testing - Wikipedia
- Integration Testing - Wikipedia
- Black-Box Testing - Wikipedia
- White-Box Testing - Wikipedia
- Subject-Matter Experts (i.e. SME or Business Domain Experts) - Wikipedia
- Software Requirements - Wikipedia
- Waterfall Model - Wikipedia
- Test Case - Wikipedia
- Manual Testing - Wikipedia
- Test Automation - Wikipedia
- Cucumber
- Behavior-Driven Development
- Selenium
- Selenium IDE
- Coveralls
[SOUND] Hi, welcome to the Dev Team Show.
0:00
My name is James.
0:05
In this episode, we're gonna be
talking about QA testing automation.
0:06
Joining me for the panel discussion is
Craig Dennis, who's a teacher here at
0:11
Treehouse, Nathan Menge, who's
a guest teacher here at Treehouse and
0:15
Doug Darst, who's the director
of engineering at Treehouse.
0:20
Welcome to the panel.
0:23
>> Thank you.
>> Thank you.
0:25
>> Thanks, James.
0:25
>> So I thought we would start by
doing a little bit of a level set.
0:26
And let's make sure that we all understand
what we mean by QA testing within
0:29
the context of this conversation.
0:33
So Doug, what do we mean by QA testing?
0:35
>> Well, QA is really a measuring body.
0:38
They measure the software
against requirements.
0:40
And those requirements might
be functional requirements,
0:43
meaning what is the software
supposed to do?
0:46
And they might be
non-functional requirements,
0:49
meaning how is the software
supposed to do that?
0:51
So in terms of performance,
or scalability, or security,
0:54
quality assurance is really concerned with
the processes that you put in place and
0:57
methodologies that you put into place
to ensure that you have quality
1:01
built into your product as early
as possible in the process.
1:05
As compared to quality control, which is
really at the back end of the process,
1:10
where you're trying to prevent
bugs from getting out the door.
1:13
So quality assurance is really that up
front proactive piece of the puzzle.
1:16
>> Okay, so, we talk a lot about unit
testing as developers here at Treehouse.
1:20
So is unit testing part of QA testing?
1:25
Or is that something different?
1:27
>> Well,
it all falls into the whole testing realm.
1:29
So every piece of testing is good testing,
right?
1:32
But typically,
when we talk about quality assurance,
1:35
we're talking about looking at the
requirements, and running a specific test
1:38
to exercise the software to ensure that
we're meeting the customer's needs.
1:42
So unit testing and
1:46
integration testing is typically performed
by the developers against their code.
1:47
And it's done in smaller pieces, so atomic
pieces, testing individual functions or
1:52
methods.
1:56
Whereas QA testing starts to focus more on
the system as a whole, making
1:57
sure the product in its final form really
meets the needs that it's intended to.
2:03
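[Note: a minimal sketch of the kind of atomic unit test Doug describes, written in Python with pytest; the function under test is a hypothetical stand-in.]

```python
import pytest

# One small, atomic piece of code, exercised in isolation:
# no UI, no database, just the function's own behavior.
def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    assert apply_discount(100.0, 25) == 75.0

def test_apply_discount_rejects_bad_input():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```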
>> So is that always black box testing,
or is it black box and white box testing?
2:08
>> Well it can be both and
companies should consider doing both.
2:14
Black box testing means that you're
sitting down at the computer like the user
2:19
would, and you really don't have any
information about how it was designed or
2:22
implemented behind the scenes.
2:26
So all you see in front of you is
the user interface, and you're testing it
2:28
based on your understanding of what
the software is supposed to do.
2:33
White box testing,
2:37
on the other hand means you have
that information behind the scenes.
2:38
You have access to the code, typically.
2:40
You can see how it was implemented,
all the error checking that might be built
2:43
into the code and
you can test it from a different angle.
2:47
So both of them have pros and cons.
2:50
>> So Nate, are developers involved
in QA testing or is this sometimes
2:51
like a different type of person on the
team in terms of experience and skills?
2:58
>> It depends on what size and
budget your business has.
3:03
If you're a small shop
with just a few people,
3:08
you'll have people doing multiple roles,
with developers coming at it,
3:11
going through their code with just
a mind to what the user experience is.
3:16
Whereas if you're a large corporation, you
might have a large staff that is entirely
3:21
there to run through it with no knowledge
of how the underlying system is built.
3:26
>> Okay, so
there might be different kinds of testers.
3:31
So you have testers with
development experience,
3:35
and testers who may not do any
development work, who may be, like,
3:41
would it be fair to say,
business experts or domain experts?
3:41
>> Yeah, exactly.
3:46
If you consider like an accounting system,
for
3:47
example, you probably want
a little bit of both.
3:51
So domain expertise would be really handy,
because you'd want to have people who were
3:54
actually maybe accountants in
the real world, who can come in and
3:58
understand how an accounting system
is really supposed to behave.
4:00
And they bring a certain
level of expertise
4:03
from a business standpoint that
you really can't get otherwise.
4:06
So they might be sitting down testing
the accounts payable module, and
4:09
they know how invoices
are entered into the system.
4:13
And they can test that mission
critical functionality very precisely.
4:16
But on the flip side, they might not have
the technical skills to do test automation
4:20
and some of those types of activities.
4:24
Conversely, people with a technical
background can do the automated testing,
4:28
but they probably don't have that deep,
deep subject matter expertise.
4:33
>> Yeah, I think that one of the things
that comes into play with trying to
4:37
decide who's doing what,
is the development cycle itself.
4:41
Like how long does it take?
4:43
You don't have a team waiting to
test a release ahead of time.
4:46
So yeah, I have seen the short release
cycle work really well with support teams,
4:51
because they're critical for saying,
this is how the product needs to work.
4:55
We need to make sure that
this works no matter what,
4:59
test this right here, sort of thing.
5:01
>> So you mentioned that QA testing,
quality assurance testing,
5:04
is something that should be done early and
often.
5:08
So how early?
5:11
Does that happen pretty much at
the beginning of the project throughout
5:13
the lifetime?
5:16
>> Yeah,
I would suggest that it really should be.
5:17
Even before the first piece
of code has been written,
5:20
you can be doing testing
on the requirements.
5:23
When we talk about quality
assurance being proactive,
5:25
you can look at the requirements and
find potential bugs that you can prevent.
5:28
And obviously the sooner that you
find the bug or potential bug,
5:32
the cheaper it is to address.
5:35
So when you're reviewing requirements,
it might state something like
5:37
provide the ability to report on
a list of employees in the system.
5:41
But you could start asking
questions around well how
5:46
quickly should that be done?
5:49
What if there's 1,000 employees?
5:50
What if there's a million
employees in the system?
5:52
What should the performance be?
5:54
And if there's a wait,
what should the UI experience be?
5:55
So you can start fleshing out
those sorts of ideas and
5:58
concerns up front, rather than after that
code has already been implemented, and
6:01
it's more costly to address.
6:04
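[Note: a hedged sketch of turning that employee-report requirement into an executable check before the feature ships; the seeding and report functions and the 5-second threshold are invented stand-ins, not a real system.]

```python
import time

# Stand-ins for the real system: seed a large employee list,
# then generate the report the requirement describes.
def seed_employees(db, count):
    db["employees"] = [{"id": i, "name": f"Employee {i}"} for i in range(count)]

def generate_employee_report(db):
    return sorted(db["employees"], key=lambda e: e["name"])

def test_employee_report_is_fast_at_scale():
    db = {}
    seed_employees(db, count=100_000)  # "what if there's 100,000 employees?"
    start = time.perf_counter()
    report = generate_employee_report(db)
    elapsed = time.perf_counter() - start
    assert len(report) == 100_000
    # The 5-second threshold is invented; the point is that the
    # performance question gets asked, and answered, up front.
    assert elapsed < 5.0, f"report took {elapsed:.2f}s; requirement says < 5s"
```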
>> Yeah, it's interesting,
cuz in my experience,
6:06
I can definitely vouch for
having more of a waterfall approach.
6:08
Where the development team is kind of
churning away at making the application or
6:13
the website, and maybe what I would
describe as informal QA testing going on.
6:18
Where the developers
are kind of poking and
6:23
prodding at it and
making sure things are working.
6:25
But we work until we're like, all right,
now we feel like we have a release.
6:28
Whether that's something that could
actually go into production or
6:33
not may be debatable, but
it's some form of milestone, right.
6:35
And we push that out to a staging
server and then say, all right,
6:39
now that version's in QA.
6:43
So what you're talking about sounds
a little different from that in terms
6:45
of process.
6:48
>> Yeah,
I think it's really important to get QA,
6:49
whatever that might look like in
your company, involved early on.
6:52
Just again, to find those bugs or
potential bugs early in the process, and
6:56
it starts with the requirements.
7:00
And then, you don't wanna wait
until the whole feature or
7:02
application is developed
before you get it into QA.
7:05
You wanna be testing in an iterative mode,
7:07
giving that feedback to
development quickly.
7:09
One, because the developers' minds are
kinda wrapped around that piece of code.
7:11
And so
there's not a lot of context switching as
7:14
compared to if you wait a month
before QA starts getting into it.
7:17
>> Right.
7:21
>> And it's just kinda good practice to
kind of be working in an iterative flow.
7:21
And that way you're building quality and
testing for quality the whole way.
7:26
Okay,
7:31
so QA testing still feels
a little nebulous to me.
7:32
What does that actually look like, Craig?
7:36
In your experience, what does QA
look like from a manual perspective?
7:38
Is that a room of people or
people sitting down in a room and
7:42
banging away at the app, or?
7:45
>> Right, I mean, that definitely can be.
7:47
Like you just talked about, I think,
you push something up to release.
7:48
Here's a new feature that you have
all talked about and thought about.
7:51
And, so I think that that manual
QA can be done by stakeholders,
7:54
like Doug was talking about.
7:58
People who are interested, like, this is
the feature that I requested, and
8:00
it's coming alive, and
I'm going to test it, and I know it.
8:03
I think that that's
a good manual QA place.
8:05
I think there are definitely people,
there are definitely companies that still
8:07
have an entire dedicated QA team,
where the thing will come to them and
8:11
they will do manual tests.
8:15
And they'll have paper check boxes even,
going through that sort of thing.
8:17
>> To describe like these are our test
cases and when we do a testing cycle,
8:21
we split these up and
we are gonna run through these actions.
8:26
>> Yeah, and I think,
I mean, you know,
8:29
there's definitely automated QA,
but I have seen, very successfully,
8:34
sites that are going live multiple
times a day, cuz of continuous deployment.
8:39
There's a check of, I need to make sure that
the thing that makes us money is working.
8:45
Yes, it does.
8:49
Okay, good, go ahead.
8:50
>> [LAUGH]
>> The deploy's good.
8:50
Yeah, that sort of manual QA I
think is always, probably,
8:52
I don't know of anybody really
who's ever fully given that up.
8:55
Hey, the site went live.
8:58
We need to make sure
that that thing that's-
8:59
>> Putting eyeballs on the thing, and
9:02
actually you get that sense of okay, yeah,
9:03
I can see that it's working and
we're good to go.
9:06
So Nate, what are some of the pains
of doing this manual process?
9:09
That sounds like kind of arduous work.
9:13
>> Yeah, so the major pains
are just the repetitiveness of it.
9:17
If you're working with a product that's in
multiple browsers on multiple operating
9:20
systems, you have to literally
do the exact same thing over and
9:24
over again across all, possibly across
all permutations of these platforms.
9:26
Some of the pains are just the amount
of output you might end up with.
9:32
I've worked for
a major company that's FDA regulated, and
9:37
I can tell you exactly what
the QA testing looked like,
9:41
because there's this half-ream of paper-
>> [LAUGH]
9:43
>> Because it had to be printed out,
9:44
signed
>> Wow.
9:46
>> With actual signatures,
verifying that we did the work.
9:47
And that these individuals are responsible
to say that this is quality.
9:49
And it's filed with the FDA.
9:53
So every single test-
>> As you were doing test cases,
9:54
you were signing that, saying,
I tested this and it passed.
9:57
>> Yeah.
10:01
>> Wow, okay.
10:02
It was all done at the end but.
10:03
>> Right.
10:04
>> Yeah.
10:05
>> So how bad can bad get when
you are doing manual testing?
10:06
Does anyone have a horror story,
in terms of what that cycle looked like?
10:10
>> So, same company I worked for: I worked
on a product that was in 14 languages,
10:15
deployed not only in those languages but
10:20
in regionalizations that
differ per language.
10:23
So you're dealing with,
already just localization and
10:26
regionalization of 30 plus cases.
10:30
And then on top of that,
operating systems and browsers,
10:32
so there were permutations
across 100 plus variables
10:37
that literally took months
of testing by a group.
10:42
That was not small, probably 20 at
its smallest to 50 at our biggest,
10:46
of people just working months at
a time to verify this for release.
10:51
And we're FDA regulated, so
10:56
we're always on the heavy side
of what we need to sign off on.
10:57
But by the time we're done testing,
the next product is ready to ship, and
11:00
we have to start all over again.
11:04
>> And repeat?
11:06
>> This will repeat the cycle.
11:07
>> Wow, I can imagine that with,
11:08
for instance,
it's like mobile devices these days.
11:10
The permutations that you have to test
just keep expanding and expanding.
11:13
It's probably not even a linear increase,
right,
11:17
it's probably exponentially increasing.
11:19
>> [LAUGH]
>> Every time you add a device and
11:21
it has any sort of flexibility,
like multiple different
11:23
browsers on the same Android device,
that's already dozens of permutations.
11:25
>> So let's talk about automation then,
11:30
because we've mentioned it
a little bit here and there.
11:32
Automation can help a lot,
but what does that look like?
11:35
What are we talking about when we say we
are gonna automate the QA testing process?
11:39
>> Well, it can really vary.
11:44
It depends on what you have as a product
and where you wanna go with it.
11:46
If you have the people, you can dedicate a
team to just start going through your manual
11:50
processes and writing code to just
repetitively knock out these cases.
11:55
But a lot of companies don't have
the staff to have a dedicated team, so
12:00
you've got a small, manual group of
people who are in their spare time,
12:04
which doesn't really
exist in the first place,
12:10
trying to automate some stuff from
the side to get through their workflow.
12:11
>> Right, so if you have an existing
application with an established QA
12:17
process, what you're suggesting is you
kind of almost moonlight this switch
12:21
over to automated testing, and you just
kind of slowly chip away at it and
12:26
maybe you get some momentum, eventually.
12:31
>> It's certainly not ideal, but it's a
situation I definitely have seen multiple
12:33
times where you just don't have
the staff to dedicate to it.
12:37
So you have to just
slowly eke your way out.
12:39
And you focus on your worst case,
your biggest time consuming tests and
12:42
automate those out.
12:48
Then with the time that saves you,
12:49
hopefully you can write some
more tests that save more time.
12:51
And slowly, slowly, you chip away at
12:55
this massive block of testing until
you have something that's manageable.
12:57
>> So it sounds like, regardless of where
you're at in sort of the lifetime of,
13:00
even if you're just starting a new
application or a new project or
13:04
even if it's a legacy application
that's been being maintained for
13:08
a long period of time, that automation
can help in all those cases.
13:11
>> Yeah, for sure,
I think a lot of companies
13:16
start off with a manual test group and
they do a lot of manual testing.
13:18
And it's quick to get going with that,
right, you can get some quick results.
13:22
But it's also kind of hard
to shift that momentum, once
13:26
you have that established, there might be
a bias against leaving manual testing.
13:29
Like Nathan was saying, it's hard to carve
out the time to dedicate to automated
13:34
testing, and having the skill set
on your team to write that code.
13:38
But I think he is absolutely right that
you just have to start chipping away at
13:42
some of your mission critical pieces
of functionality that have to work.
13:46
And really focusing on those things that
are super time consuming, or prone to
13:49
error to free up your manual testers to
work on other areas of the application.
13:54
And use the automation as an accelerator
for your testing efforts.
13:58
>> For sure, and on that I've never met
a manual tester who doesn't want to do
14:02
automation testing.
14:06
So there's always a benefit: you're
empowering people to learn how to code, and
14:07
they're usually very excited about
learning how to do it.
14:11
I think you can chip away at that.
14:15
>> So to be clear, when we're
talking about this automation piece,
14:18
we're gonna leverage some sort of tool.
14:21
And there's probably a variety of
tools that you can use to do this.
14:22
But you're actually talking about,
for instance,
14:26
if it's a web-based application, you're
automating browsing from page to page or
14:29
from screen to screen,
clicking buttons, filling out fields.
14:34
Is that what we're talking about?
14:37
>> Yeah, definitely.
14:38
>> So to set up those test cases,
14:40
typically you're writing code to do that,
is that true?
14:42
>> That's true.
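[Note: a minimal sketch of that kind of script using Selenium WebDriver in Python; the URL and element IDs are hypothetical, the Selenium calls are standard.]

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Drive the browser the way a manual tester would: open a page,
# fill out fields, click a button, and check where you landed.
driver = webdriver.Chrome()
try:
    driver.get("https://app.example.com/login")  # hypothetical URL
    driver.find_element(By.ID, "email").send_keys("qa@example.com")
    driver.find_element(By.ID, "password").send_keys("not-a-real-password")
    driver.find_element(By.ID, "submit").click()
    assert "Dashboard" in driver.title  # hypothetical landing page
finally:
    driver.quit()
```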
>> Are there some tools that,
14:45
I feel like I've heard of tools.
14:47
I don't know if that's Cucumber or
14:50
what I'm thinking of,
that are more natural language?
14:51
>> So yeah, Cucumber is what they
call a behavior-driven development tool,
14:54
where you literally write out
just natural human sentences.
14:58
And basically under the hood those
sentences are interpreted into
15:02
automated steps with the tool.
15:07
But Cucumber particularly I really
enjoyed because you could get your
15:09
people who knew business of your product,
15:13
like pure business analysts
who didn't understand code.
15:16
You could get them to write things out
in a sort of verbose way that could
15:20
easily transfer into Cucumber steps
which a test engineer could then take.
15:25
>> Yeah.
>> And automate the whole system.
15:31
But the big benefit is that it
leaves the tests in a state that
15:33
any person can just read them and
exactly understand what everything does.
15:37
You don't have to parse code.
15:41
It's literally just reading sentences.
15:43
When this, then that,
Given X, when Y, then Z.
15:45
>> Right.
>> So it's a structured language, but
15:52
it's natural and broad enough that
people don't feel too confined, and
15:53
they're not looking at curly braces and
semicolons for instance,
15:57
and getting kind of hung
up on what that looks like.
16:00
>> Yeah.
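[Note: a sketch of the Given/When/Then structure just described. Cucumber itself runs on Ruby, the JVM, and JavaScript; this example uses behave, a Python tool built on the same Gherkin syntax, and the application details are hypothetical.]

```python
# steps/invoice_steps.py -- behave step definitions. The matching
# Gherkin scenario, in a plain-text .feature file, would read:
#
#   Feature: Invoice entry
#     Scenario: An accountant enters a valid invoice
#       Given I am logged in as an accountant
#       When I enter an invoice for $250.00
#       Then the invoice appears in accounts payable
#
from behave import given, when, then

@given("I am logged in as an accountant")
def step_login(context):
    context.invoices = []  # stand-in for a real login and session

@when("I enter an invoice for ${amount}")
def step_enter_invoice(context, amount):
    context.invoices.append(float(amount))  # stand-in for real UI steps

@then("the invoice appears in accounts payable")
def step_check_invoice(context):
    assert len(context.invoices) == 1
```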
>> I've seen that used as
16:03
a great place for business logic,
where you're like,
16:04
what happens when you're logged in and
you can't?
16:06
Are you supposed to be able to click that?
16:08
And you're like, well, let's read
the test together as a product team.
16:09
It's pretty neat to look-
>> The testing documentation is a really
16:14
powerful tool.
16:16
>> Yeah.
>> Interesting, so
16:17
is that a common way of doing this, or
is that still a little bit more fringey,
16:18
in terms of how to go about-
>> In five different places I've worked,
16:24
we only did it once.
16:27
>> Okay.
16:28
[LAUGH]
>> And I loved it, but
16:29
that was also the only place where I
was able to come in at the start and
16:31
was heavily involved in decision process.
16:34
Everywhere else I've worked,
I come in late, and
16:37
I have to work with what already exists.
16:40
>> Right, so you might end up with
more of a hybrid approach then.
16:42
So if you have a blended
team of some developers and
16:45
some business experts, stakeholders,
16:48
domain experts, whatever the term that
we're using there, collaborating.
16:52
So you still might have some manual
process where the people who
16:56
aren't able to write code
are responsible for that.
16:59
Maybe they're also working shoulder to
shoulder with the developers to write or
17:01
do the programming to write
the automated portions.
17:06
Is that kind of a fair way to
describe how that can work?
17:09
>> It certainly can work that way.
17:13
I mean,
there's really no one way to do this,
17:14
given your staff, what you have.
17:16
If you don't have
technical people except for
17:18
your developers, then basically the
developers have to write the main tests.
17:21
Otherwise you're not gonna
have automated tests.
17:24
>> Yeah, I would say that in general
from a QA perspective, you wanna be one,
17:27
transparent with all your testing and
testing tools.
17:31
But two,
17:34
have a system in place where everyone
can participate in the testing effort.
17:34
So it's not just the responsibility of
a dedicated QA staff, if you have one, to
17:39
do all the testing,
you wanna bring in your development staff,
17:42
your product managers, if you have them.
17:45
Customer support is a great resource for
17:47
helping to test because they know
where all the problem areas are.
17:51
But I agree with Nathan that
you use the right tool for the job.
17:54
So you don't have to be locked into one
tool and try to apply that everywhere.
17:59
It's gonna be a combination of tools,
18:03
depending on what it is
you are trying to test.
18:04
>> Yeah, no golden hammer, so to speak.
18:06
>> Yeah, right.
18:08
>> One of the gateway tools
that I've seen work is there's,
18:09
I think it's Selenium that produces it.
18:12
It's a little browser plug-in and
you click it.
18:14
It just kind of records your clicks as you
go, and then it generates a little script,
18:16
like, this is what you did, with the
code below, and it's a thing that you
18:20
can take a look at and go, I can see how
we could apply this across the site.
18:24
So that's a great sales tool.
18:26
I've seen that convert teams, like, maybe we
shouldn't only do manual testing if the manual
18:27
tests can produce these sets of scripts.
18:32
I mean, it just kind of works across the board.
18:34
>> Right, having a recorder I
can imagine would be huge,
18:36
cuz I can totally see how you're
gonna say, hey, I'm just going to
18:38
install this thing and have you go through
what you would normally do manually.
18:41
But we're also gonna capture some of that,
and
18:45
I imagine there's some finessing
maybe you need to do here and there.
18:48
>> Sure, yeah, if you record the
script, it's gonna use the same value you
18:52
typed in every time, so
you'll wanna randomize that.
18:55
It definitely is a great
jumpstart to get the script going.
18:57
>> In general, I found it's fine for
a starting point,
19:00
but that is not gonna be your goal.
19:03
These are record and playback scripts that
have the weakness that if there's any sort
19:05
of logical changes in your code, they
quickly become more work to maintain than
19:09
it would be if you had a more
abstract implementation
19:14
through something like Selenium or
Cucumber.
19:19
There's all kinds of testing frameworks
for just about every language and
19:24
every framework out there.
19:27
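[Note: a sketch of the "more abstract implementation" being contrasted with record-and-playback here: a page object wraps UI details in one place, and the recorded literal value gets randomized. Names, URLs, and element IDs are hypothetical.]

```python
import uuid
from selenium.webdriver.common.by import By

class SignupPage:
    """One place that knows how the signup page is built."""
    URL = "https://app.example.com/signup"  # hypothetical URL

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get(self.URL)

    def register(self, email):
        # If the form's markup changes, only these locators change;
        # every test that calls register() keeps working.
        self.driver.find_element(By.ID, "email").send_keys(email)
        self.driver.find_element(By.ID, "signup").click()

def unique_email():
    # A recorded script replays the same typed value on every run;
    # randomizing it keeps repeated runs from colliding.
    return f"qa-{uuid.uuid4().hex[:8]}@example.com"
```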
>> Okay, we've kind of been, or at least
in my mind anyway during this discussion,
19:29
I keep thinking web applications
because that's what I have the most
19:33
experience with.
19:36
But I would assume that even if you are
doing like mobile development whether that
19:37
be native or React Native or something
else, or even desktop development,
19:41
for instance, that there's probably tools
that will assist with automating that QA
19:45
testing as well?
19:49
>> Yeah, absolutely, I mean, there's
emulators on the mobile side that help
19:50
you, and
you can apply automation tools to those.
19:54
On the desktop side, it's been a long
time since I worked in that area,
19:57
but there are tools like Silk and
whatnot that, basically,
19:59
much like Selenium, allow you to interact
with the UI and drive the application,
20:03
as if a user was sitting right there
at the keyboard interacting with it.
20:07
>> That makes total sense, okay.
20:09
So help me out here.
20:11
You're investing in writing code, so
it's kinda another code base that's
20:14
to the side of the code base in your
application that you're working on.
20:18
The application's not gonna be static.
20:22
It's gonna continue to grow and
change and evolve over time.
20:25
What are we talking about here?
20:27
Are we taking on, is that overhead gonna
end up killing us in the long run?
20:29
And are there ways to help manage that?
20:34
>> So first, with the overhead of testing,
20:37
you're talking about something that we're
using to minimize the cost of testing.
20:40
So as long as you're focused on quality,
automation
20:45
done correctly is only going to minimize
the amount of work you have to do.
20:50
You will have to do some work
to maintain your testing suite.
20:54
But now, moving forward, you're running
these tests every time you add a feature.
20:57
So these tests, if they're gonna break,
you're gonna see where they're gonna break,
21:02
you're gonna fix those tests,
and you're gonna move forward.
21:05
So it shouldn't ever really get stale and
21:08
out of touch with your code base,
it should evolve with it.
21:09
>> Right, so
there's going to be maintenance.
21:13
But I guess it's fair to say that even if
you're doing manual testing, those manual
21:15
testers would have to update written
test cases and relearn the application.
21:19
So it's kinda just repurposing that
effort into maintaining code and to your
21:24
point there is probably a large enough
net gain savings from automating this,
21:29
that it's hard to exceed that in terms of
maintenance, from a code perspective.
21:34
>> Yeah, totally, either you pay for it on
the proactive side, or you're gonna pay for
21:40
it on the reactive side of it, right?
21:43
[LAUGH]
>> Right?
21:44
>> Like if these bugs are showing up,
you're gonna be paying for
21:45
it later anyway.
21:47
Development trying to go and
catch that and fix that.
21:47
And then fixing that, like,
did I break something else over here?
21:49
>> Right.
>> But I've seen these test runs go
21:52
where you're like, I just changed that one
field, and then you break 30 tests,
21:54
because this form was used all over the place,
and this one little bit touched it.
21:58
I mean, when you see that happen,
you're like, wow,
22:01
this is great that it did that, because nobody
would either sit there and test
22:04
those or think about all these different
scenarios that are in the code there.
22:08
>> Yeah.
22:12
>> The thing
there is, you've got to focus on,
22:13
you've got to put some
amount of effort into your testing,
22:17
the same way you do in your development,
because it's important.
22:21
It's important that your code is working.
22:23
>> Right, that makes sense.
22:25
>> I think you have to go into it knowing
that a little bit of planning goes
22:26
a long way.
22:29
So a lot of times in automation systems,
people will create frameworks.
22:29
And so, you can have common functionality
22:34
that everyone can use in their scripts, so
you don't want everyone at
22:36
the beginning of their script to write
the code to log in to the system, right.
22:39
So you create some common code
that everybody can utilize and
22:43
those things go a long way.
22:46
And then again, when that changes
in the actual application,
22:47
hopefully there's just one place that you
can change it, and so it minimizes that.
22:50
So, a little bit of
planning goes a long way.
22:54
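[Note: a sketch of that common login code as a shared pytest fixture; the login flow and selectors are hypothetical. Every test starts from an authenticated session, and a login change means one fix in one place.]

```python
# conftest.py -- shared by every test in the suite.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

@pytest.fixture
def logged_in_driver():
    # The one and only place that knows how to log in.
    driver = webdriver.Chrome()
    driver.get("https://app.example.com/login")  # hypothetical URL
    driver.find_element(By.ID, "email").send_keys("qa@example.com")
    driver.find_element(By.ID, "password").send_keys("not-a-real-password")
    driver.find_element(By.ID, "submit").click()
    yield driver  # each test receives an already-authenticated browser
    driver.quit()
```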
>> That makes a lot of sense.
22:55
So imagine I'm managing a dev team.
22:57
Maybe I'm the manager or the product manager, or
I have some decision-making power or
22:59
some control over how we move ahead here.
23:03
We've been doing what we've been
doing with manual testing for
23:06
a long time and
you've come to me and said,
23:09
hey we've got to automate some of this,
but maybe I'm not buying it.
23:11
Maybe I'm interested, but
I'm like, I'm unsure if this is
23:14
really gonna have the return on
the investment that we need to make.
23:18
Do you have any other
stories that you can share,
23:21
where you've seen that sort of,
you're at a decision point here.
23:23
And I'd like to hear a success story.
23:27
>> Sure, yeah, I've got a good story.
23:29
I think a lot of times,
if you're facing that type of question,
23:31
all you have to do is look towards
real life and what happens.
23:34
And one of my previous companies,
we had a large ERP system,
23:37
and it was a system where multiple users
can be in there at one time, so like 40,
23:43
50, 60 people at one time.
23:47
And a bug was reported that occasionally
a race condition would occur, so
23:49
there'd be some type of contention over
a particular record in the database or
23:53
what have you. It was very hard to
reproduce, but it was very impactful,
23:56
in that when it occurred it crashed
the system and everyone got knocked out.
24:00
So a very important bug.
24:04
We tried to reproduce this manually,
meaning getting our dev and QA staff and
24:07
customer support staff,
24:11
so very expensive, it took a long
time to get set up to begin with.
24:12
Trying to reproduce it.
24:15
Once we reproduced it,
it was wait a minute.
24:16
We didn't have enough debugging
going on in the system.
24:19
We didn't have enough logging to really
understand what the problem was.
24:22
So we gotta do this all again, right?
24:24
So hours and hours and
hours of investment.
24:26
Fortunately, we had the foresight,
on the side, of having
24:28
engineers writing some automated
tests to do this exact same thing and
24:31
eventually we were able to reproduce it
more frequently and with less effort.
24:35
And so it started paying for
itself right there.
24:39
But then,
beyond that once the fix was in place,
24:42
we could take those same scripts and
let them run 24/7 for an entire week and
24:46
make sure that that race
condition did not occur again.
24:52
And then, a third point is that that could
then be added to our test suite, and
24:56
run every time we do a release.
25:00
>> Right.
25:02
>> So something that we didn't have
in place that took some time to do,
25:02
really got rid of the need to have this
massive manual testing effort that
25:07
just wasn't sustainable.
25:12
There's no way you could do that on a
regular basis, and you certainly weren't
25:13
running it 24/7 for a week to
make sure the bug was really gone.
25:17
So, automation can just solve these types
of volume issues that you
25:20
are trying to deal with when you are
talking about people who have only been working
25:25
for eighteen hours a day. [LAUGH]
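[Note: a hedged sketch of the approach Doug describes: scripted concurrent clients hammering the same record to surface a rare race condition, something people cannot do around the clock. The endpoint and payload are hypothetical stand-ins.]

```python
import concurrent.futures
import requests

RECORD_URL = "https://erp.example.com/api/records/42"  # hypothetical

def contend(worker_id):
    # Every worker updates the same record, maximizing contention.
    resp = requests.put(RECORD_URL, json={"worker": worker_id})
    return resp.status_code

# 50 simulated users, retried thousands of times; leave it running
# overnight, or for a week, and a rare race shows up far sooner than
# a roomful of people could manage.
with concurrent.futures.ThreadPoolExecutor(max_workers=50) as pool:
    for attempt in range(10_000):
        codes = list(pool.map(contend, range(50)))
        assert all(code == 200 for code in codes), f"failure on attempt {attempt}"
```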
>> Yeah, having done a lot of manual QA
25:30
tests, especially where you first involve the domain
experts, it would be really nice to know for
25:34
a fact that when you run your automated
QA tests, they actually ran, and what
25:40
the results were, instead of having your
fingers crossed that everyone is doing it.
25:45
In lieu of, like, having signed
pieces of paper that said, I read this,
25:51
I did this thing, right?
25:55
>> Right.
>> And
25:57
that would be really great from
a developer's perspective, just having that
25:57
sort of certainty, like, you know that it ran.
26:02
You know whether it passed or failed.
26:03
>> There's some really interesting
tools that exist now.
26:05
Like I know we use Coveralls
where I work right now.
26:07
And it actually is just a website that our
26:10
test coverage goes to, like, our tests
run and it generates a report.
26:12
That report gets hosted on Coveralls, so
26:17
we can go back at any moment of
any day of any build really.
26:19
We can see what the test coverage was,
what tests ran, what the results were.
26:22
It's continuing to evolve, but
it's amazing what convenience
26:27
is being built into the testing
frameworks these days.
26:31
>> Yeah, has anyone had the experience
of implementing automated QA testing and
26:35
actually can tie that or
associate that to happier users,
26:40
like users feeling like things are more
stable or that the product is better?
26:44
Is that something you've been able to
see, that association, so strongly?
26:48
>> I think it may be
a qualitative association.
26:53
I think when your QA staff is sleeping
well because they know that the tests
26:56
are all passing, [LAUGH]
>> [LAUGH]
27:00
>> I think that alone is worth the money.
27:01
But certainly, like Nathan was saying,
when you have the transparency into
27:05
the test results, and you can rerun
these tests whenever you need to.
27:09
And even things where you're at the end
of a release cycle, and all of a sudden
27:12
you've got to fix a bug, do you have time
to rerun all your tests manually?
27:16
Or maybe a new operating
system version comes out or
27:19
new browser version comes out, and
you've got to rerun these tests.
27:22
Well, nobody really plans for
those sorts of things, and so
27:25
you don't have time for it.
27:28
And so, there's a lot of peace of mind
knowing that a new browser just came out,
27:28
we can do a full regression
of our application and
27:33
have confidence that it works.
27:35
Or if there is a bug, we're gonna be the
first ones to find it, not our end users.
27:37
>> That is great.
27:41
>> Yeah.
27:42
>> You hit on something
there. I think that it also allows
27:43
for better release cycles, right?
27:46
So if you're asking specifically about
what your users are expecting, yeah,
27:48
you can release more quickly because
you know that everything's there.
27:51
You don't have the QA period
there of traditional waterfall,
27:54
like you had said, so you can
release a new feature on Wednesday and
27:59
Thursday and Friday,
because the tests are working.
28:04
You get in the habit of writing
those tests alongside of it and
28:07
you know that that's going and you can
show that the business logic is working to
28:10
everybody who's concerned about
that feature [CROSSTALK].
28:13
>> So that's interesting.
28:16
So you have an issue that says, add this
feature, and part of completing that
28:18
feature is, you have the automated
QA test that goes along with it,
28:21
along with your unit test from
a development perspective, and so on.
28:25
>> Right.
28:29
>> Yeah, that's great.
28:30
So for anyone who's now convinced,
28:32
I have to automate my QA testing,
any advice on how to get started?
28:35
>> Well, I guess one of the things I would
just reiterate or emphasize is again,
28:40
if you have a QA team, it's not
just the QA team's responsibility.
28:45
Bring everybody into the effort,
development certainly, product management,
28:49
customer support.
28:54
Develop a system or
28:55
a methodology where everybody can
help add tests to the system.
28:56
Really try to make your tests repeatable,
so you get the benefit out of them.
29:00
Make them so that they're tied into
your continuous integration system and
29:03
really have them work for you.
29:09
They're not things that you just
run at the end of the release.
29:10
They're being run every day.
29:12
Really take advantage of
the machine power that you have.
29:13
>> Nate?
29:16
>> Yeah, I think the main thing is to
just look at your ROI on it, is just
29:17
find what your biggest pain points are, and
that's where you start and you just move
29:22
down the list of worst case offenders for
what takes your time to validate.
29:27
>> Okay, Craig?
29:31
>> I think it's important to
respect the tests as code.
29:33
The same way that you treat
your other code in the system.
29:37
Show that same level of respect.
29:39
And that-
>> Reviews.
29:41
>> Yeah, yeah, totally,
and best practices, right.
29:42
Hey, this is duplicated,
why are we duplicating?
29:45
Let's figure out a way to make this,
let's make good fixtures,
29:48
let's make it fun to write tests.
29:50
That's something that happens too, I see
that with people, sometimes you'll get into,
29:52
it's kind of a mess to write this test, and
they don't make it more fun.
29:57
Take on the software the same
way that you would when you
30:00
develop a component,
same thing with the tests there.
30:04
>> Excellent, well thanks for
joining the discussion today.
30:08
Great discussion and thanks for
30:11
sharing your experiences about QA
testing and QA testing automation.
30:13
>> Great.
>> Thank you.
30:16
>> And thanks for watching. And
be sure to check the notes for this video,
30:18
we'll have resources listed there
30:22
to learn more about QA testing or to learn
more about automation of those QA tests.
30:25
Also, be sure to rate the video and
let us know how we're doing.
30:30
See you next time.
30:33
[SOUND]
30:34