Running Jasmine on AppVeyor

07 Jul, 2014 01:54 PM

We are running Jasmine JavaScript tests after the build using PhantomJS. The tests run, but to get the Jasmine output into AppVeyor some plugin is required to report the results in a format AppVeyor understands. So the question is: does AppVeyor have a plugin to display results from Jasmine tests?

  1. Support Staff 1 Posted by Feodor Fitsner on 07 Jul, 2014 02:03 PM

    There is a REST API for sending test results to AppVeyor; currently xUnit, NUnit, MSpec and MSTest are integrated.

    Either the PhantomJS runner should be extended to call that API, or the XML results could be parsed (if Jasmine is able to generate them).
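For context, this is roughly the shape of a single-result POST body, using the field names that appear later in this thread. It is an illustrative sketch, not the authoritative API reference; the actual send would POST the JSON to %APPVEYOR_API_URL%api/tests.

```javascript
// Sketch: build the JSON body for one POST to {APPVEYOR_API_URL}api/tests.
// Field names follow the snippets later in this thread; treat this as an
// illustration rather than the authoritative API reference.
function buildTestResult(testName, passed, stackTrace) {
    return {
        testName: testName,
        testFramework: "jasmine",
        fileName: "jasmine",
        outcome: passed ? "Passed" : "Failed",
        ErrorMessage: "",
        ErrorStackTrace: passed ? "" : (stackTrace || ""),
        StdOut: "",
        StdErr: ""
    };
}
```

The resulting object would be serialized with JSON.stringify and sent with whatever HTTP client the test runner has available.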

  2. 2 Posted by Joao Inacio on 17 Jul, 2014 11:25 AM

    Hello Feodor,

    Regarding this, we had some issues using the REST API. We were running our tests post-build and hitting the API to report all the results, but only about a third of the tests showed up on AppVeyor: it seems that as soon as the post-build script finishes, AppVeyor moves on to the Testing stage and stops processing the API requests. I had to add a generous sleep to make sure all the requests were processed before it moved on to the Test stage.

    Is there a better way to do this?
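One alternative to a fixed sleep is to track every in-flight report and wait for all of them before the script finishes. A sketch, assuming each send returns a promise (with PhantomJS-era jQuery, the equivalent would be collecting the jqXHR objects from $.ajax and waiting on them with $.when):

```javascript
// Sketch: track every pending report so the script can wait for all of them
// instead of sleeping. `postResult` is a hypothetical stand-in for whatever
// actually POSTs a result to the AppVeyor API and returns a promise.
var pending = [];

function report(postResult, result) {
    pending.push(postResult(result));
}

function whenAllReported() {
    // Resolves only once every tracked request has completed.
    return Promise.all(pending);
}
```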

  3. Support Staff 3 Posted by Feodor Fitsner on 17 Jul, 2014 11:41 AM

    Hi Joao,

    Do you make synchronous calls to REST API or async?

  4. 4 Posted by Joao Inacio on 17 Jul, 2014 12:41 PM

    They are async.
    Also, is there a way we can do a call for multiple tests results (to batch them together) instead of one per each?

  5. Support Staff 5 Posted by Feodor Fitsner on 17 Jul, 2014 12:43 PM

    Not right now, but we could implement that (don't think it's hard). Would supporting NUnit and xUnit formats be sufficient?

  6. 6 Posted by Joao Inacio on 17 Jul, 2014 01:02 PM

    ATM we're using simple REST API posts, with JSON as the body:

        $.ajax({
            type: 'POST',
            url: this.reportBaseUrl + 'api/tests',
            data: {
                "testName": testName,
                "testFramework": "jasmine",
                "fileName": "jasmine",
                "outcome": passedOutcome(passed),
                "ErrorMessage": "",
                "ErrorStackTrace": stackTrace,
                "StdOut": "",
                "StdErr": ""
            }
        });

    Just wondering if it could be more efficient if we bundled the calls into a few batches, instead of one per test.

    Do you reckon I should execute the ajax calls in sync mode? Won't that slow down the whole process for a considerable time?

  7. Support Staff 7 Posted by Feodor Fitsner on 17 Jul, 2014 01:18 PM

    It's slowing things down either way: running them in sync, or adding a sleep like you did.

    Batches could work, though my concern is batch size. There is a maximum size for messages sent from the build worker to AppVeyor, so be aware that sending a batch could fail if it contains a lot of data.

    Another way is uploading a combined test results report to some URL in XML or JSON format. A very large data set could be uploaded that way, though without real-time log updates.

    What do you think?
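The size limit could be handled on the sending side by splitting results into batches whose serialized size stays under a cap. A sketch; the cap value used in practice would depend on the actual worker limit, which is not stated here:

```javascript
// Sketch: split test results into batches whose JSON-serialized size stays
// under maxBytes, so no single request exceeds a message-size cap.
// A single result larger than the cap still gets its own batch.
function batchBySize(results, maxBytes) {
    var batches = [];
    var current = [];
    results.forEach(function (r) {
        current.push(r);
        if (JSON.stringify(current).length > maxBytes && current.length > 1) {
            current.pop();             // r made the batch too big
            batches.push(current);     // ship what we had
            current = [r];             // start the next batch with r
        }
    });
    if (current.length > 0) batches.push(current);
    return batches;
}
```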

  8. 8 Posted by Joao Inacio on 17 Jul, 2014 01:34 PM

    The more flexibility the better as far as I'm concerned - even if you don't get the real time update, as long as you have them accounted for in the end and know if they all passed / what failed, I'm all for it.

    I will give the sync calls a go for now and see if it improves the total run time of the jasmine tests, cheers.

  9. Support Staff 9 Posted by Feodor Fitsner on 17 Jul, 2014 01:37 PM

    OK, we will implement batches first, then uploading XMLs.

  10. Support Staff 10 Posted by Feodor Fitsner on 27 Jul, 2014 02:35 PM

    Great news! We've just deployed an update with batch updates and XML support!

    How to import XML test results during the build:

    Build Worker API for batch test updates:
    The "Add tests" and "Update tests" topics have been updated with the new endpoints.

    Let me know how that works for you!
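A reporter can take advantage of the batch support by buffering results and flushing them in one request. This assumes the new batch endpoint accepts an array of the same result objects; `send` below is a hypothetical stand-in for the actual HTTP POST:

```javascript
// Sketch: buffer individual results and send them as one batch.
// `send(path, payload)` is hypothetical; in the real runner it would POST
// the JSON array to the build worker API's batch endpoint.
function createBatchReporter(send) {
    var buffer = [];
    return {
        add: function (result) {
            buffer.push(result);
        },
        flush: function () {
            if (buffer.length === 0) return 0;
            send("api/tests/batch", buffer.slice()); // one request, whole array
            var count = buffer.length;
            buffer = [];
            return count;
        }
    };
}
```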

  11. 11 Posted by Dan Jones on 06 Aug, 2014 04:32 PM

    Hi Feodor,

    Following up on this, though it's slightly unrelated to the original question from my colleagues:
    we are able to detect whether our Phantom test runner captured any failed tests.
    I would then like to fail the build, but it continues to the next part of the build step (.NET unit tests), and if that goes OK we get a successful build.
    Currently I am trying to fail with something like this:


    This may be more of a question about PhantomJS, so apologies. But any guidance would be appreciated. Is there something I can report via the REST API that would mark this as a failed build?

    Thanks again, Daniel

  12. 12 Posted by Joao Inacio on 06 Aug, 2014 04:32 PM

    I'll be out of the office until the 11th of August. Any urgent queries please e-mail [email blocked].

    Thank you,
    Joao Inacio

  13. Support Staff 13 Posted by Feodor Fitsner on 06 Aug, 2014 11:47 PM

    Hi Daniel,

    Exit code is the best solution in most cases. It's strange that it's not being passed through. Are you calling the Phantom runner as ps: or cmd:? It should definitely work with cmd:.

    Alternatively, in PS mode you can use $host.SetShouldExit(1).

  14. 14 Posted by Dan Jones on 07 Aug, 2014 01:43 PM

    Hi Feodor,

    The solution was quite simple and, as you said, the exit code should have been respected... I changed this:

        if (failures > 0)
            phantom.exit(1);
        phantom.exit(0);

    to this:

        if (failures > 0) {
            phantom.exit(1);
        } else {
            phantom.exit(0);
        }

    thinking that the first exit would stop code execution, but the second exit with error code 0 still ran.
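The behaviour Dan describes can be simulated outside Phantom: phantom.exit requests an exit but does not halt the current script the way a return would, so a later exit call can still run and replace the requested code. A minimal simulation with a fake exit function:

```javascript
// Simulate the bug: `exit` records a requested code but, unlike `return`,
// does not stop execution, so the trailing exit(0) also runs.
function runBuggy(failures, exit) {
    if (failures > 0)
        exit(1);
    exit(0); // still runs even when failures > 0
}

// The fix: make the two exits mutually exclusive.
function runFixed(failures, exit) {
    if (failures > 0) {
        exit(1);
    } else {
        exit(0);
    }
}
```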

  15. Support Staff 15 Posted by Feodor Fitsner on 07 Aug, 2014 01:48 PM

    Cool, thanks for the update!

  16. Support Staff 16 Posted by Feodor Fitsner on 07 Aug, 2014 02:17 PM

    It would be cool if you could share your experience of running Jasmine tests on AppVeyor with the community ;) We could add an article to our docs then.

  17. 17 Posted by Dan Jones on 07 Aug, 2014 02:22 PM

    Of course - I'll put something together and post back later.

  18. 18 Posted by johnny_reilly on 03 Sep, 2014 10:41 AM

    I'd be very interested in how this turns out. I've been using Chutzpah to run my Jasmine tests inside Visual Studio and in TFS / Visual Studio Online. (There's a slightly rambly blog post I wrote on this here: )

    Given AppVeyor's .NET emphasis would Chutzpah be a good way to get Jasmine / Mocha / QUnit etc JavaScript tests running as part of the build? Perhaps someone has mentioned this already? I'd love to know!

    FWIW, there are details about the Chutzpah command line runner here and here, and it supports the following JavaScript testing frameworks:
    - QUnit
    - Jasmine
    - Mocha

    In case it helps, here's an example of a solution with a separate Jasmine JavaScript test project included (using Chutzpah):

    The Jasmine tests can be found here (dependant upon the chutzpah.json in the root):


    You can test this out inside Visual Studio 2012 / 2013 using the Chutzpah extension / add-in

    Since this has become quite a long comment on a slightly different area from the original topic I've broken this into a separate discussion.

  19. 19 Posted by Dan Jones on 03 Sep, 2014 10:51 AM

    Apologies for the delay - I do have a rather sketchy series of notes on what we did. I will tidy up and post back...

  20. 20 Posted by johnny_reilly on 03 Sep, 2014 11:33 AM

    Thanks Dan. I'm an AppVeyor noob so any guidance on how to make the 2 play nice is appreciated!

  21. 21 Posted by Dan Jones on 03 Sep, 2014 02:31 PM

    It's still a bit sketchy, but basically:

    The general need is this; I'm sure it's familiar...

    - We want to run builds on each pull request, and have a GitHub hook send a payload when a pull request is synchronised.
    - This kicks off a build of the pull request branch.
    - We want an early indication of the status of that code. There are several indicators: some from the build server, some manual checks recorded in GitHub and other systems (code review status + code builds OK + unit tests not broken + Jasmine tests not broken + browser tests + manual BAT + manual QA process, etc.).
    - We also have a process to build a selection of pull requests and deploy to a staging server, to get an indication of the status of all the code together (where BAT and QA take place).

    The issue was that we firstly needed the Jasmine tests reporting into the Tests build output, and secondly that a test failure actually failed the build, as the .NET unit tests do, rather than carrying on and reporting a successful build on the pull request.

    Basic solution (amended code - assumes knowledge of running Jasmine reporters via JavaScript):

    General flow - start Phantom via a cmd build step pointing at testRunner.html and testRunner.js.
    testRunner.html - includes the script references, among them the jasmine AppVeyor reporter script, which hooks into Jasmine events when we get a test result and logs console messages.
    testRunner.js - kicks things off and deals with the total results, failing the build if necessary:

        page.open(testPage, function (status) {
            //Page is loaded!
            if (status !== 'success') {
                console.log('Unable to load the address!');
            } else {
                // Run our code here
                console.log('Initializing tests from phantom...');
                page.evaluate(function (appVeyorUrl) {
                    //Custom reporter script which hooks into the necessary events and reports on status
                    var tcReporter = new jasmine.AppVeyorReporter(appVeyorUrl);
                    var oldCallback = tcReporter.reportRunnerResults;
                    tcReporter.reportRunnerResults = function (runner) {
                        oldCallback.apply(this, arguments);
                        this.log('##jasmine.complete'); //NOTE - this is used in testRunner.js
                    };
                    jasmine.getEnv().addReporter(tcReporter);
                    jasmine.getEnv().addReporter(new jasmine.TrivialReporter());
                    //kicks it all off - all describes must have been run by now..
                    //TODO have some form of event so we know all tests have been registered;
                    //for now, just use a good old timeout
                    setTimeout(function () {
                        jasmine.getEnv().execute();
                    }, 3000);
                }, reportBaseUrl);
                //Using a delay to make sure the JavaScript is executed in the browser
                window.setTimeout(function () {
                    //if the test runner never got started for some reason, quit after 20 seconds
                    if (!anyLogs) {
                        //fail the build
                        console.log("status - phantom test runner did not report any results after 20 secs");
                        phantom.exit(1);
                    }
                }, 20000);
            }
        });

    We log Jasmine messages with a custom script which essentially just logs to the console when it gets a result back from a Jasmine test.
    We log ##jasmine.failed, ##jasmine.passed and ##jasmine.complete, and the onConsoleMessage handler that Phantom calls into processes the current status.
    We also report this to the Tests build output in AppVeyor via:

        //Inside our jasmine.appVeyorReporter script, when we get a test result
        $.ajax({
            type: 'POST',
            url: this.reportBaseUrl + 'build/compilationmessages',
            data: {
                "message": passed ? "Passed " + testName : "Failed " + testName,
                "category": passed ? "information" : "error",
                "details": passed ? "" : stackTrace,
                "fileName": testName,
                "line": "",
                "column": "",
                "projectName": "jasmine",
                "projectFileName": ""
            },
            // Feodor's suggestion to avoid waiting for requests:
            async: false
        });

    We hook into Phantom's page.onConsoleMessage function to process each message, counting failures and successes and noting when we have finished.
    If we have finished, we check whether there are any failures and exit Phantom with the appropriate exit code (1 for a failed build, otherwise 0).
    The AppVeyor cmd build step respects this and fails our build if necessary:

    page.onConsoleMessage = function (msg) {
        if (msg) {
            anyLogs = true;
            if (msg.indexOf('##jasmine.passed') !== -1) {
                //count the pass
            }
            if (msg.indexOf("##jasmine.failed") !== -1) {
                //count the failure
            } else if (!onlyReportFailures) {
                //log the message as-is
            }
            if (msg.indexOf("##jasmine.complete") !== -1) {
                failures > 0
                    ? console.log('!! Finished with ' + failures + ' failures out of ' + testCount + ' tests !!')
                    : console.log('Win! Completed ' + testCount + ' tests with no failures.');
                if (failures > 0) {
                    var failBuildMessage = "There are failing jasmine tests. I'm gonna have to fail this build. Fix the tests and come back later.";
                    //report failBuildMessage and phantom.exit(1)
                } else {
                    //phantom.exit(0)
                }
            }
        }
    };
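The counting behind page.onConsoleMessage can be factored into a pure function, which makes it easy to test outside Phantom. A sketch; the names here are illustrative, not taken from the original runner:

```javascript
// Sketch: pure message-processing logic for the ##jasmine.* marker protocol.
// Returns the exit code (0 or 1) once '##jasmine.complete' is seen,
// otherwise null.
function createTally() {
    return { passed: 0, failed: 0 };
}

function processMessage(tally, msg) {
    if (!msg) return null;
    if (msg.indexOf('##jasmine.passed') !== -1) tally.passed++;
    if (msg.indexOf('##jasmine.failed') !== -1) tally.failed++;
    if (msg.indexOf('##jasmine.complete') !== -1) {
        return tally.failed > 0 ? 1 : 0; // 1 fails the cmd build step
    }
    return null;
}
```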

    And our appVeyor build step cmd script:

    cd [buildFolder]\[pathToTestRunnerPage.html]
    [buildFolder]\[pathToPhantom]\phantomjs.exe.1.8.0\phantomjs.exe testrunner.appveyor.js testrunner.html false %APPVEYOR_API_URL%

    That's it. It's not a complete coded solution, but hopefully you get the idea.

  22. 22 Posted by johnny_reilly on 03 Sep, 2014 07:29 PM

    Hi Dan,

    Thanks for the very fleshed out answer - that's really helpful! I'm not at a computer to try this out at present but I mean to. I've a couple of questions if you'd be so good:

    1. You're invoking Phantom to run your Jasmine tests - I'm guessing the phantomjs.exe is part of your repo? I ask because Chutzpah similarly includes Phantom, and my ideal scenario would be being able to use Chutzpah both in Visual Studio and in AppVeyor so I don't have to duplicate config etc. If you're invoking your tests by running Phantom from your repo, it gives me hope I might be able to do similar with Chutzpah. And if that fails, trying your approach is clearly a good option.

    2. Your script samples certainly give the gist of how things fit together. Could you be a little clearer on the contents of testRunner.js and the jasmine AppVeyor reporter script, please? I'm not sure which script samples go into which files, or whether the jasmine AppVeyor reporter script is something else entirely. If you'd be able to share those files in their entirety, that would be wonderful. Your call, of course.

    Thanks for all the help so far.


  23. 23 Posted by Dan Jones on 04 Sep, 2014 11:42 AM

    Sure, to clarify:

    We consume phantomjs.exe as a NuGet package. We exclude it from the GitHub repository, but it is restored on build and then referenced in a build step at that path, i.e. path_to_installed_packages\phantomjs\phantomjs.exe.

    The scripts:
    testRunner.js simply defines the phantomjs objects/functions:

        var args = require("system").args;
        //validate the correct args are passed in

        var page = new WebPage();
        page.settings.localToRemoteUrlAccessEnabled = true; //prevents cross-origin issues when loading local resources, i.e. tmpls
        //This is required because PhantomJS sandboxes the website and does not show the console messages from that page by default

        page.onConsoleMessage = function (msg) {
            //process reporter result
            //keep a count of failures, successes, total tests etc
            //if we have finished all the specs, log one more console message with the
            //results and phantom.exit(0) or phantom.exit(1) as necessary
        };

        page.open(testPage, function (status) {
            //setup the reporter and kick off test execution (inside page.evaluate)
            var tcReporter = new jasmine.AppVeyorReporter(appVeyorUrl);
            tcReporter.reportRunnerResults = function (msg) {
                this.log('##jasmine.complete'); //tell page.onConsoleMessage that we have finished
            };
        });

    The custom reporter is set up, and certain of its functions are called into while the tests are executing; most importantly for us:

        reportSuiteResults: function (suite) {
            var results = suite.results();

            for (var i = 0, ilen = results.items_.length; i < ilen; i++) {
                var spec = results.items_[i];
                var passed = true;
                for (var j = 0, jlen = spec.items_.length; j < jlen; j++) {
                    var result = spec.items_[j];
                    passed = passed && result.passed_;
                }
                if (!passed) {
                    //then log to the appveyor api build messages
                    $.ajax({
                        type: 'POST',
                        url: this.reportBaseUrl + 'build/compilationmessages',
                        data: { /* message details as in the earlier snippet */ },
                        async: false
                    });
                    //this will cause the page.onConsoleMessage above to be called
                    jasmine.getGlobal().console.log("##jasmine.failed " + spec.description);
                }
            }
        },

    Hope this makes sense!

  24. 24 Posted by Dan Jones on 04 Sep, 2014 11:44 AM

    And to clarify further: testRunner.html must reference the test reporter script and, of course, all the other scripts required during the actual testing.

  25. 25 Posted by johnny_reilly on 04 Sep, 2014 12:44 PM

    Thanks Dan - it pretty much does. I'm going to try the Chutzpah route first and having done a little digging I think it might work quite well. If it doesn't come off then I'm going to give your approach a try.

    Actually, your approach has an advantage over Chutzpah in that you can get test-by-test results, whereas Chutzpah will only allow batch results.

    There is a Chutzpah NuGet package available so I might follow your lead and include the Chutzpah NuGet package as you've included the Phantom one.

  26. 26 Posted by johnny_reilly on 04 Sep, 2014 01:06 PM

    Here's a question; are you setting this up as a Before tests script, After tests script, Before build script or After build script?

  27. 27 Posted by Dan Jones on 04 Sep, 2014 01:52 PM

    After Build.
    So our steps are currently

    Before build script: nuget restore
    After build script: Jasmine tests + unit tests

    The tests section of our project settings is actually off - we do it all in the after build scripts.
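That setup could be captured in appveyor.yml roughly as follows. A sketch: the bracketed paths are placeholders (as in Dan's cmd script above), the unit-test script name is hypothetical, and test: off mirrors turning the Tests section off.

```yaml
# Sketch of the build flow described above; bracketed paths are placeholders.
before_build:
  - nuget restore
after_build:
  - cmd: cd [buildFolder]\[pathToTestRunnerPage] && [pathToPhantom]\phantomjs.exe testrunner.appveyor.js testrunner.html false %APPVEYOR_API_URL%
  - cmd: "[runDotNetUnitTests].cmd"   # hypothetical script for the .NET unit tests
test: off
```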

  28. 28 Posted by johnny_reilly on 04 Sep, 2014 02:04 PM

    Nice - I'll give that a try when I get to a machine. Time to dust down my rusty powershell skills...

  29. 29 Posted by johnny_reilly on 06 Sep, 2014 11:31 AM

    Hi Dan,

    Just wanted to let you know that, following your help, I've been making some progress with Chutzpah and Jasmine. You can see how far I've got here - thanks for all your help so far!

    I've also blogged about it:

