Meeshkan generated Puppeteer scripts with Jest and GitHub actions

Makenna Smutz

COO & co-founder

Mar 23, 2021

[Screenshot: downloading the Puppeteer script]

This post comes on the tail of some great news: Meeshkan now lets you export Puppeteer test case scripts! You can find the details in our changelog. This article is a great read if you're already a Meeshkan user and looking to implement the scripts in your workflow.

Some backstory

Meeshkan as a product ("UI tests generated by your users and run automatically") was born in September of 2020. Before then, we'd worked on all sorts of testing, mocking, and ambitious dev projects, but they all culminated in this direction.

This is to say that downloadable tests are a stepping stone toward our product vision. It's an incredibly impactful step that fulfills the flow we've promised Meeshkan users: record production user behavior, create test cases, and run them automatically against your staging branch.

Puppeteer and why we chose it

Introducing new dependencies into someone's stack is a sacred thing. We're continuing to deliver on our value by slotting into your workflow irrespective of the tooling you might be using. Meeshkan records browser events that can be transcribed into any browser scripting technology, and we narrowed our options down to Selenium versus Puppeteer. Internally, we have a lot of experience with the Puppeteer API, so we went with it to better support our users and deliver a more fluid experience.

Now, let's implement!

With the Puppeteer script, there are boundless possibilities. We'll walk through three levels of complexity and the benefits each one offers.

All of the following steps require a script downloaded from a test case in Meeshkan. Visit an individual user story and click the more button in the top-right corner. Then click "Download Puppeteer script" in the dropdown, and a file named after the user story, with a .js extension, should begin downloading.

Inside the codebase of the project you're working with, install puppeteer.
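For example, with npm (use the equivalent for yarn or whichever package manager your project uses):

```shell
npm install --save-dev puppeteer
```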

1. Out of the box Puppeteer

With your JavaScript file downloaded, open it in the IDE of your choice (I'm using VS Code). This is what my file looks like (I've added a few comments for clarity):
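A simplified sketch of what a generated script can look like; the URL, selectors, and token value here are placeholders standing in for the actual recorded events:

```javascript
const puppeteer = require('puppeteer');

const main = async () => {
  // Launch a visible browser so you can watch the test run
  const browser = await puppeteer.launch({ headless: false });
  const page = await browser.newPage();

  // Navigate to where the user story starts (placeholder URL)
  await page.goto('https://app.meeshkan.com/settings');

  // Replay the recorded user events (placeholder selectors)
  await page.click('#auth-token-input');
  await page.type('#auth-token-input', 'my-new-token');
  await page.click('button[type="submit"]');

  await browser.close();
};

main();
```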

This is a test I'm setting up for the Meeshkan webapp to make sure that Meeshkan users can successfully add authentication tokens on their project's settings page. The user story behind this Puppeteer script was generated from Meeshkan users performing exactly these actions.

We're going to manually 'test' whether this user story can run, or in other words, whether users can perform this action. You'll want to cd into the directory with the script and use node to invoke it.
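For example, if the downloaded file were named add-auth-token.js (yours will carry your user story's name):

```shell
node add-auth-token.js
```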

You should see a browser window pop open and be driven without any input needed. This can take a few seconds! There are two ways to know if the test is successful:

  1. Watching: does the correct thing happen before your eyes?
  2. Terminal output: does the script complete without complaining, or are there errors?

The types of errors that can pop up are basic assertion failures. These include:

  • A page doesn't exist
  • An element doesn't exist
  • An element does exist, but an action cannot be performed on it

For some, this is a bit too manual; let's see if we can add a bit of automation to this.

2. Puppeteer per commit with GitHub Actions

I really couldn't rave enough about GitHub Actions and how smooth the workflow is (once you get past the unforgivingness of YAML syntax, that is 😉).

Navigate to the "Actions" tab in GitHub and create a new workflow. This opens a draft editor with a folder/file structure of .github/workflows/main.yaml relative to the root of your repository.

[Screenshot: creating a new GitHub Actions workflow]

The initial YAML file will have comments at almost every line explaining the make-up of a GitHub Action file.

  1. First, change the name of the test suite to "Meeshkan tests".
  2. Then change the on: value (the trigger for test runs) to push.
  3. Finally, create our first job, which sets up the Node environment, installs dependencies, and runs the Puppeteer script.
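Put together, a minimal version of that workflow might look like this (the Node version, action versions, and script filename are assumptions; adjust them to match your project):

```yaml
name: Meeshkan tests

on: push

jobs:
  run-puppeteer-script:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: '14'
      - name: Install dependencies
        run: npm ci
      - name: Run the Puppeteer script
        run: node add-auth-token.js
```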

Now, what we previously did locally will run per commit via GitHub Actions.

This is still light on the assertions though. In the next step, we'll look into defining more conditions for the test to succeed.

3. Writing assertions with Jest

Jest is a super popular JavaScript testing framework that lets you write tests simply. It's the option we chose internally at Meeshkan. We already had Jest installed in our repo, so it was an easy extension for us.

To extend Jest with Puppeteer, we'll use jest-puppeteer. I'd suggest following the setup details in the jest-puppeteer repository. The important bits I found:

  • Your jest.config.js should now define the preset as jest-puppeteer.
  • Make sure you don't have a testEnvironment set from a previous or default configuration; the package will take care of this.
  • If you're using TypeScript, jest-puppeteer has documentation covering that as well.

Writing assertions is straightforward. We'll describe a user story (which Jest calls a test suite), declaring its expectations at certain points. I'll break the suite up into logical steps via assertions. First, let's look at one step.
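A single step can look something like this (the URL and the "Settings" text are placeholders; jest-puppeteer exposes page as a global, and expect(page).toMatch checks for text on the page):

```javascript
describe('Add an auth token', () => {
  it('opens the project settings page', async () => {
    // `page` is a global provided by jest-puppeteer
    await page.goto('https://app.meeshkan.com/settings');
    await expect(page).toMatch('Settings');
  });
});
```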

describe blocks wrap it statements, and the suite succeeds or fails depending on each assertion. Let's make sure this works locally before moving on. The full file we're operating with is:
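A sketch of what such a file can look like in full (again, the selectors, URL, token value, and on-page text are placeholders for the recorded user story):

```javascript
describe('Add an auth token', () => {
  beforeAll(async () => {
    // TEST_URL lets us point the same suite at different environments
    await page.goto(process.env.TEST_URL || 'http://localhost:3000');
  });

  it('navigates to the project settings page', async () => {
    await page.click('a[href="/settings"]');
    await expect(page.title()).resolves.toMatch('Settings');
  });

  it('saves a new auth token', async () => {
    await page.type('#auth-token-input', 'my-new-token');
    await page.click('button[type="submit"]');
    await expect(page).toMatch('Token saved');
  });
});
```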

The frontend team at Meeshkan uses NX as a set of dev tools to manage our monorepo. It makes it super slick to run tests, deploy incrementally, share resources (called libs), and more.


So our test command is likely not how you'd do it. Depending on your setup, you'll run something like:
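With a plain Jest setup, that'd be along the lines of (the test filename is hypothetical):

```shell
npx jest add-auth-token.test.js
```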

Before running, take the time to swap out your base URL (process.env.TEST_URL) and add cookies if your service or user story requires a signed-in user.
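For instance, you could inject a session cookie in the suite's beforeAll; the cookie name session and the SESSION_COOKIE variable here are hypothetical — use whatever your app's auth actually sets:

```javascript
beforeAll(async () => {
  // Point the run at the right environment
  const baseUrl = process.env.TEST_URL || 'http://localhost:3000';

  // Authenticate by setting a session cookie before navigating
  await page.setCookie({
    name: 'session',                     // hypothetical cookie name
    value: process.env.SESSION_COOKIE,   // hypothetical env variable
    url: baseUrl,
  });
  await page.goto(baseUrl);
});
```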

Once you've verified the behavior locally, we'll implement this newfound functionality in our GitHub Action!

[Screenshot: green checks on a GitHub pull request]

Now, to get green checks on a GitHub PR, more specific conditions need to be met. The file will now read:
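A sketch of how the workflow can look at this point, swapping the direct node invocation for Jest (the job name run-jest-puppeteer-tests is the one the bonus section's needs key points at; action versions are assumptions):

```yaml
name: Meeshkan tests

on: push

jobs:
  run-jest-puppeteer-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: '14'
      - name: Install dependencies
        run: npm ci
      - name: Run the Jest suite
        run: npx jest
```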

4. Bonus: using await-vercel

Since Meeshkan uses Vercel, we also implemented an interesting workaround to test our hosted staging environment instead of a local build. The await-vercel GitHub Action allows us to define the deployment URL and poll until it has finished deploying.

We suggest a parallel setup for whichever host you use, as some issues only come out of the woodwork when hosted.

First, add a job to the beginning of the main.yml file. Then add your team's Vercel token as a GitHub secret on the repository you're testing and pass it in as an environment variable.
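Assuming the secret is named VERCEL_TOKEN, the new job starts out along these lines (pin the action to whichever release of UnlyEd/github-action-await-vercel is current):

```yaml
jobs:
  wait-for-vercel-deployment:
    runs-on: ubuntu-latest
    steps:
      - name: Await the Vercel deployment
        uses: UnlyEd/github-action-await-vercel@v1.1.0
        env:
          VERCEL_TOKEN: ${{ secrets.VERCEL_TOKEN }}
```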

If you're only testing against a long-lived branch, like staging, you can pass the deployment-url hardcoded as such:
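For example, with a project named webapp on the team meeshkanml (swap in your own names; timeout is in seconds):

```yaml
with:
  deployment-url: webapp-git-staging-meeshkanml.vercel.app
  timeout: 90
```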

If you work with feature branches, you'll need to set this dynamically! We'll add three new steps that:

  1. Get the branch name, then
  2. Transform it into the deployment URL, stored as a step output named branch, and finally
  3. Display the status of the deployment URL.

[Screenshot: Vercel project settings]

Vercel uses a deployment URL pattern of ${projectName}-git-${branchName}-${teamName}.vercel.app. You can find the project name and team name in the URL or settings of your project. Our project name in this screenshot is webapp and our team name is meeshkanml.

This is our await-vercel configuration so far:
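Putting those three steps together, the job can look like this (::set-output was the standard way to pass step outputs at the time of writing; branch names containing slashes or uppercase characters get slugified by Vercel, so they may need an extra transformation):

```yaml
jobs:
  wait-for-vercel-deployment:
    runs-on: ubuntu-latest
    steps:
      # 1. Get the current branch name from the git ref
      - name: Get branch name
        id: branch-name
        run: echo "::set-output name=name::${GITHUB_REF#refs/heads/}"

      # 2. Transform the branch name into the Vercel deployment URL
      - name: Build deployment URL
        id: branch
        run: echo "::set-output name=url::webapp-git-${{ steps.branch-name.outputs.name }}-meeshkanml.vercel.app"

      # 3. Poll Vercel until the deployment is ready
      - name: Await the Vercel deployment
        uses: UnlyEd/github-action-await-vercel@v1.1.0
        env:
          VERCEL_TOKEN: ${{ secrets.VERCEL_TOKEN }}
        with:
          deployment-url: ${{ steps.branch.outputs.url }}
          timeout: 90

      # Display the status of the deployment URL
      - name: Display deployment status
        run: echo "Deployment ready at ${{ steps.branch.outputs.url }}"
```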

Finally, you'll want to make sure this step blocks the testing step we configured above. You do this by adding needs: wait-for-vercel-deployment to the run-jest-puppeteer-tests job.
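In other words, the testing job gains a needs key:

```yaml
run-jest-puppeteer-tests:
  needs: wait-for-vercel-deployment
  runs-on: ubuntu-latest
  steps:
    # ...the Jest steps from before, now running only after the deployment is live
```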

In conclusion

Meeshkan outputting Puppeteer scripts is the first step toward running tests generated by your users for a better release experience. How you implement them may vary from the above, so please get in touch if you have questions, or a cool new way of implementing it yourself that you'd like us to add!
