mercredi 30 septembre 2015

Python unittest at command line, resolving dependencies

I have to use Python 2.2.6 for a project. I built it using PyCharm on Windows. In PyCharm, I can execute my test file, and it runs correctly. Here is merge_sort_test.py:

import unittest
from merge_sort.merge_sort import get_middle, merge, merge_sort

class MergeSortTest(unittest.TestCase):
    def testGetMiddle_5_shouldBe2(self):

...

def main():
    unittest.main()

if __name__ == '__main__':
    main()

The folder structure is:

merge_sort
|__ __init__.py
|__ merge_sort.py
|__ test
   |__ __init__.py
   |__ merge_sort_test.py

I need to be able to run tests from a command line (in Windows and Unix), as well as within PyCharm.

However, I am not sure how to execute the test from the command line with correctly resolved dependencies.

If I execute the test file, I get the message No module named merge_sort.merge_sort.

I have looked around StackOverflow for other ways to execute the test at the command line, without success. Some of them just don't work (at least with v2.2.6), and some I didn't understand. I am looking for an explanation of how to execute tests that a Python novice can understand.

Thanks in advance
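(For reference, a sketch of one commonly suggested fix, assuming the folder layout above: make the directory that *contains* the merge_sort package importable before the test module imports it. The helper name below is illustrative, not part of the question's code.)

```python
import os
import sys

def add_project_root_to_path(test_file):
    """Prepend the directory that contains the merge_sort package to sys.path.

    test_file is the path of merge_sort/test/merge_sort_test.py, so the
    project root is three dirname() calls up from the absolute path.
    """
    root = os.path.dirname(
        os.path.dirname(os.path.dirname(os.path.abspath(test_file))))
    if root not in sys.path:
        sys.path.insert(0, root)
    return root

# Calling this at the top of merge_sort_test.py, before the
# `from merge_sort.merge_sort import ...` line, makes the import
# resolvable no matter which directory the test is launched from:
# add_project_root_to_path(__file__)
```

With the package root on sys.path, the same file runs from any working directory; on Python 2.7+, running `python -m unittest discover` from the project root is the usual alternative.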

Unit testing directive with AngularJS + Jasmine

I am new to unit testing with Jasmine, so I hope this makes sense and is correct enough to get an answer. I am trying to test an AngularJS directive.

Here is my plunker: http://ift.tt/1iN3h3h

In my case I am unable to get the value of the textbox (id="Montid").

Here is my Angular code:

app.directive("monthNext", function () {

console.log('massif');
return {
    restrict: 'A',
    link: function (scope, element) {
        element.on('input', function () {
            var todaysYear = new Date();
            var u = todaysYear.getFullYear() - 2;

            if (element.val().length == 4)
            {

                var nextElement = element.next().next().next().next().next().next().next();
                nextElement = angular.element(document.querySelectorAll('#Montid'));

                if (element.val() <= u) {

                    console.log(element.children());
                    //var nextElement = angular.element(document.body).find('[tab index = 6]')
                    console.log(nextElement);
                    //nextElement.focus();

                    console.log(nextElement);
                    nextElement.val("");
                    nextElement[0].focus();
                }
                else
                {
                   // alert(nextElement.val());             
                    console.log(nextElement.val("01"));
                }
            }

        });
    }
};

});

here is my jasmine code

describe('CommonBusInfo', function () {
var element, scope, timeout;
beforeEach(function () {
    module('CommonBusInfo');

    inject(function ($rootScope, $compile) {
        scope = $rootScope.$new();
        element = angular.element('<form><input id="Montid" ng-model="test" value="09" type="text"/><input id="yearId" type="text" value="2015" month-next/></form>');
        $compile(element)(scope);
        scope.$digest();
    });
});
it('should set Month value to 1', function () {
    var x = element.find('input');
    x.triggerHandler('input');
    scope.$digest();
});
});

I want to read the Montid value so that I can compare it.

Thank you, Chaitanya

Unit testing with [Laravel 5]

I would appreciate if someone can show me how can I test this method inside my controller:

class CommentController extends Controller {

    protected $update;

    function __construct(Comment $update) {
        $this->update = $update;
    }
    /**
     * Update the specified resource in storage.
     *
     * @param  int  $id
     * @return Response
     */
    public function update(UpdateCommentRequest $request) {
        if (Input::get('task') == 'updateComment') {
            if ($this->update->find(Input::get('id'))
                            ->update(['text' => $request->get('text')])) {
                return json_encode(array('success' => true));
            }
        }
    }
}

This is update route: /api/project/

Selenium C# - Run multiple tests from same test

So I have a suite of tests that can all be run individually. I have set up an Excel sheet (I will move it to a DB later, though that shouldn't matter here) that contains a list of tests with the browsers and days on which I want to run them. Each day's tests are run on my server through one test category that can call multiple different tests with different parameters.

This is all working, but I am looking for a better way to kick off each of these tests so that I still get individual test results, the same way I do when running them individually or in a build that contains a category of tests. This is also to ensure that the tests execute independently.

So the question is: how can this be achieved?

How do I unit test QueryOver with a SelectList?

How do I unit test the code below using A.Fake<>?

using (session)  
{  
    var result = session.QueryOver<AnnualInformation>()  
                 .JoinAlias(a => a.MonthlyInformation, () => monthlyAlias, JoinType.LeftOuterJoin)  
                 .JoinAlias(a => a.ShareValueInformation, () => shareAlias, JoinType.LeftOuterJoin)  
                 .JoinAlias(a => a.MiscDetails, () => miscAlias, JoinType.LeftOuterJoin)  
                 .SelectList(list => list  
                             .Select(x => x.Id)  
                             .Select(x => x.CreationDate)  
                             .Select(x => x.AnnualAmount)  
                             .Select(x => x.AnnualCurrency)  
                             .Select(() => monthlyAlias.MonthlyAmount)  
                             .Select(() => monthlyAlias.MonthlyCurrency)  
                             .Select(() => shareAlias.CurrentSharevalue)  
                             .Select(() => miscAlias.MarketValueAmount)  
                             ).Where(a => a.Id == 123456).List<object[]>();  
}

Any ideas, folks? A code snippet would be appreciated.

How to test exception in void method? [duplicate]

This question already has an answer here:

For example I have the following method....

private Trace trace = new Trace();
private TracertImpl tracert = new TracertImpl(); 

public void msg(){

        message.add("Result : " +result);
        try {
            trace.add(tracert.BuildMessage(MessageType.New,
                    wayPoint, CastLevel.B , "Cast " + cast.id
                            + " reason: " + msg));
        } catch (Exception e) {
            log.error("CONSTR: "
                    + " Call to tracert.BuildMessage threw exception.", e);
        }
}

My team wants 100% coverage, but the exception case can't be reached unless tracert is null. Is there any other way to cover the exception?

How can I test Sidekiq job failures?

A job in sidekiq, upon exception, will be put on the retry queue.

Because of that, and because the task is run asynchronously, MyWorker.perform_async(...) can never throw an exception generated in the task code.

In testing, however, an exception that occurs in the task does not cause the task to be put in the retry queue. The exception bubbles up out of perform_async.

So what happens in tests is something that cannot possibly occur when running the code.

What, then, is the best way to test code that triggers jobs that can fail and be put on the retry queue?

Note that the following seems to have no effect in testing:
Sidekiq.default_worker_options = { :retry => true}

Reactive extension fixed Interval between async calls when call is longer than Interval length

Here is my Interval definition:

m_interval = Observable.Interval(TimeSpan.FromSeconds(5), m_schedulerProvider.EventLoop)
                .ObserveOn(m_schedulerProvider.EventLoop)
                .Select(l => Observable.FromAsync(DoWork))
                .Concat()
                .Subscribe();

In the code above, I feed the IScheduler in both Interval & ObserveOn from a SchedulerProvider so that I can unit test faster (TestScheduler.AdvanceBy). Also, DoWork is an async method.

In my particular case, I want the DoWork function to be called every 5 seconds. The issue here is that I want the 5 seconds to be the time between the end of one DoWork call and the start of the next. So if DoWork takes more than 5 seconds to execute, let's say 10 seconds, the first call would be at 5 seconds and the second call at 15 seconds.

Unfortunately, the following test proves it does not behave like that:

[Fact]
public void MultiPluginStatusHelperShouldWaitForNextQuery()
{    
    m_queryHelperMock
        .Setup(x => x.CustomQueryAsync())
        .Callback(() => Thread.Sleep(10000))
        .Returns(Task.FromResult(new QueryCompletedEventData()))
        .Verifiable()
    ;

    var multiPluginStatusHelper = m_container.GetInstance<IMultiPluginStatusHelper>();
    multiPluginStatusHelper.MillisecondsInterval = 5000;
    m_testSchedulerProvider.EventLoopScheduler.AdvanceBy(TimeSpan.FromMilliseconds(5000).Ticks);
    m_testSchedulerProvider.EventLoopScheduler.AdvanceBy(TimeSpan.FromMilliseconds(5000).Ticks);

    m_queryHelperMock.Verify(x => x.CustomQueryAsync(), Times.Once);
}

The DoWork call invokes CustomQueryAsync, and the test fails saying that it was called twice. It should only be called once because of the delay forced with .Callback(() => Thread.Sleep(10000)).

What am I doing wrong here?

My actual implementation comes from this example.

Mock instance isn't using property code

I have some Django models I need some unit test coverage on and in doing so I mock out some instances of them. Here is an example class I want coverage of

class MyMixin(object):
    @property
    def sum(self):
        return self.field_one + self.field_two + self.field_three

class MyModel(Model, MyMixin):

    field_one = IntegerField()
    field_two = IntegerField()
    field_three = IntegerField()

So I can mock out an instance of it like so:

mock_inst = mock.Mock(spec=MyModel, field_one=1, field_two=2, field_three=3)

However, when I go to evaluate mock_inst.sum, it doesn't execute the property code; it gives me something from the mock class instead. Shouldn't it execute the code, given the spec on the instance?
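(For illustration, a standalone sketch of the behaviour being asked about, with the mixin inlined and the missing `self.` added. `spec` only copies attribute *names* onto the mock; it never runs the property body. The `fget` call is one hedged workaround, not the only answer.)

```python
from unittest import mock

class MyMixin(object):
    @property
    def sum(self):
        return self.field_one + self.field_two + self.field_three

inst = mock.Mock(spec=MyMixin, field_one=1, field_two=2, field_three=3)

# Accessing inst.sum returns a child Mock -- the property code never runs,
# because spec only restricts which attribute names exist on the mock.
assert isinstance(inst.sum, mock.Mock)

# Workaround: call the real property's getter against the mock, so the
# mixin logic executes without instantiating a Django model.
assert MyMixin.sum.fget(inst) == 6
```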

Using Google Mock, how do I give a mock implementation without caring about / setting any expectation of invocation?

I have an interface class say:

class MyInterface
{
    virtual int doThing(int x, int y, int z) = 0;
};

I want to write a mock implementation for use in my tests. Traditionally, without using Google Mock, I would write, say:

class MyMock : public MyInterface
{
    virtual int doThing(int x, int y, int z)
    {
        if (x == 1)
            return y + z;
        else
            return y - z;
    }
};

How would I do this in Google Mock? Please note, I don't want to (OK, I don't need to) set an expectation about how this mock is called. I'm just using it to test something else.

How would you do it (and what is the clearest way)? I find the Google Mock documentation a little too concise to figure this out.

Unit Test fatalError in Swift

How to implement unit test for a fatalError code path in Swift?

For example, I've the following swift code

func divide(x: Float, by y: Float) -> Float {

    guard y != 0 else {
        fatalError("Zero division")
    }

    return x / y
}

I want to unit test the case when y = 0.

Note, I want to use fatalError not any other assertion function.

Mockito Mocking Android Context PackageManager Exception

I'm starting out with Mockito for Android headless unit tests. The part I want to test is in the backend and depends on Context. I tried mocking the Context, but I get null when I run the test.

This example uses a mocked Context but doesn't show how it is mocked: http://ift.tt/1MOtBq3

The example mentioned in the link above (http://ift.tt/1FIrBxF) also has no example of how the Context is mocked.

So I'm a little lost.

I have the following in my gradle dependencies:

testCompile 'junit:junit:4.12'
androidTestCompile 'com.android.support.test:runner:0.4'
androidTestCompile 'com.android.support.test:rules:0.4'
androidTestCompile 'com.android.support:support-annotations:23.0.1'
testCompile 'org.mockito:mockito-core:1.10.19'
androidTestCompile('com.android.support.test:testing-support-lib:0.+')

Snippet of code:

@RunWith(MockitoJUnitRunner.class)
public class Test {
    @Mock Context mContext;
    RequestQueue mQueue;

    @Test public void getCategories() {
        final String SERVER = "http://{...}/categories";
        mContext = mock(Context.class);
        int size = 20;
        when(mContext.getResources().getInteger(R.integer.cache_size)).thenReturn(size);
        mQueue = VolleyUtil.newRequestQueue(mContext, null, size);

        final Response.Listener<String> listener = new Response.Listener() {
            //...
        };

        Response.ErrorListener error = new Response.ErrorListener() {
            ///...
        };

        mQueue.add(new JsonRequest(SERVER, listener, error));
    }
}

VolleyUtil.newRequestQueue(final Context context, HttpStack stack, final int cacheMB){
final File cacheDir = new File(context.getCacheDir(), "volley");
  if(stack == null) {
     if(Build.VERSION.SDK_INT >= 9) {
        stack = new HurlStack();
     } else {
        String userAgent = "volley/0";

        try {
           final String network = context.getPackageName();
           final PackageInfo queue = context.getPackageManager().getPackageInfo(network, 0);
           userAgent = network + "/" + queue.versionCode;
        } catch (PackageManager.NameNotFoundException e) {
           e.printStackTrace();
        }

        stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
     }
  }

  final BasicNetwork network = new BasicNetwork((HttpStack)stack);
  final RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir, cacheMB * 1000000), network);
  queue.start();
  return queue;

}

My null exception happens at:

final PackageInfo queue = context.getPackageManager().getPackageInfo(network, 0);

Am I supposed to mock the PackageManager or the Application instance?

How to write a test for a network listener?

I am currently writing a piece of C++ code that will listen for network connections. I'm using gtest to write unit tests but I've reached a problem I don't know how to fix:

How can I test the functions that listen for network connections? If I put this in my unit test they'll block.

Any ideas?

AngularJS mock JSON unit testing

var app= angular.module('app', []);

Below is my factory method, which gets the data from sample.json:

app.factory('factoryGetJSONFile', function($http) {
  return {
    getMyData: function(done) {
      $http.get('mock/sample.json')
      .success(function(data) {
        done(data);
      })
      .error(function(error) {
        alert('An error occured whilst trying to retrieve your data');
      });
    }
  }
});

Below is my controller. I am able to access the service data in my controller:

app.controller('homeController', ['$scope', 'factoryGetJSONFile', function ($scope, factoryGetJSONFile) {

    factoryGetJSONFile.getMyData(function (data) {
        $scope.name = data.projectDetails.name;
        $scope.duration = data.projectDetails.duration;
        console.log($scope.name+ " and duration is " + $scope.duration);
    });

}]);

Below is my sample.json

{
    "status": 200,
    "projectDetails": {
        "name": "Project Name",
        "duration": "4 Months",
        "business_place": "Dummy address"
    }
}

How do I write unit test cases for the above GET service? I would like to test projectDetails.name in my test cases.

Unit testing AJAX, checking the returned JSON [Laravel 5]

I want to test my controller; I need to see which JSON is returned.

Here is my controller method:

public function update(UpdateCommentRequest $request)
{
        if(Input::get('task') == 'updateComment')
        {
            if(Comment::find(Input::get('id'))
                    ->update(['text' => $request->get('text')]))
            {
                return json_encode(array('success' => true));
            }
        }

}

Here is my test try:

public function testHome()
{
    $this->be(User::find(1));
    $response = $this->call('PUT', '/api/project/1/comment/1/comments/1', array(
        'text' => 'testjjjjjjjjjjjjjjjjjjjjjjjjjjjj',
        'projectID' => 1,
        'id' => 246,
        'level' => 0
    ));
    dd($response);
}
Even if my test passes, it does not return any content... What is the right way to assert that I got a successful AJAX response?

Android Studio / Gradle Running NDK JNI Unit Tests

Is there a better way to run NDK Lib / JNI unit tests on device using Android Studio & Gradle?

Currently:

Build all NDK source and then use adb to push the built files to a tmp directory on device

e.g.:

adb push libs/armeabi/* /data/local/tmp

Run all of the tests on device

adb shell "LD_LIBRARY_PATH=/data/local/tmp /data/local/tmp/run_all_unitTests"

Extract results

adb pull /data/local/tmp/test_results.xml 

How to debug something that works under Java 7 but not under Java 8

I have a project with a unit test that works under Java 7, but not under Java 8. Is there a good way to investigate such things? (I'm sure that the test is correct; this suggests that there's a subtle bug in the implementation.)

Really I suppose what I would like is a quick way to identify where the code paths diverge. But this is hard, because there might be all sorts of differences in the code paths at a very low level through the JDK, and I don't want to get bogged down in irrelevant differences that are down to tiny optimisations.

So the nice thing would maybe be to ask at what top level trace the paths diverge; and then, starting from just before that point, to ask at what second level trace the paths diverge; and so on.

But I've no idea whether there's a way to do this. I fear I could waste a lot of time if I don't have a systematic approach.

The code, by the way, is the Apache Phoenix repository, where under Java 8, I get the following failure:

Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.006 sec <<< FAILURE! - in org.apache.phoenix.schema.PMetaDataImplTest
testEviction(org.apache.phoenix.schema.PMetaDataImplTest)  Time elapsed: 0.006 sec  <<< FAILURE!
java.lang.AssertionError: expected:<3> but was:<2>
    at org.junit.Assert.fail(Assert.java:88)
    at org.junit.Assert.failNotEquals(Assert.java:834)
    at org.junit.Assert.assertEquals(Assert.java:645)
    at org.junit.Assert.assertEquals(Assert.java:631)
    at org.apache.phoenix.schema.PMetaDataImplTest.testEviction(PMetaDataImplTest.java:98)

Can Karma refresh the file changes without running the whole suite again?

I am using Karma through Grunt. We have around 1000 tests and it is a bit painful to have them all run whenever we change a file (autoWatch = true).

This is what we are doing now:

  1. Start Karma with singleRun=false, autoWatch=false.
  2. Open the debug page and grep for a specific suite (using mocha html reporter).
  3. Change a test or file related to that suite.
  4. Refresh the debug page to run the set of tests again.
  5. My changes in (3) haven't been picked up by Karma so the tests still behave as if nothing had changed.

This is what I need:

  1. Start Karma with singleRun=false, magicOption=true.
  2. Open the debug page and grep for a specific suite (using mocha html reporter).
  3. Change a test or file related to that suite.
  4. Refresh the debug page to run the set of tests again.
  5. My changes are properly picked up and only the grepped tests are run.

If I set autoWatch=true I get what I need but the whole suite of 1000 tests is run in the background whenever I change a file, which soon collapses my environment.

I don't think there is anything equivalent to magicOption according to Karma docs but, is there any way to achieve the same behaviour?

Thanks a lot.

Is it bad practice to debug unit tests?

A colleague of mine mentioned that it is bad practice to debug unit tests and I'm wondering if this is true. If it is, why is this considered bad practice?

Unit Testing Insert Empty Record Should Throw Error

I am new to unit testing, and I am trying to test that an empty Employee record is not inserted into the database. When I call Context.SaveChanges() in the unit test it does not throw the error, but when I try it out in the Controller it throws an error as expected.

I am guessing that the Employee entity isn't being added to the context in the unit test, so when I call SaveChanges() nothing is actually saved? Any help would be appreciated!

Unit Test

[Test]
[ExpectedException(typeof(DbEntityValidationException))]
public void ShouldNotSaveEmptyEmployee()
{
    var mockSet = new Mock<DbSet<Employee>>();

    var mockContext = new Mock<SqlContext>();
    mockContext.Setup(m => m.Employees).Returns(mockSet.Object);

    var sut = new EmployeeRepository(mockContext.Object);
    sut.Save(new Employee());
}

Repository:

public void Save(Employee employee)
{
    if (employee.EmployeeId > 0)
    {
        Context.Entry(employee).State = EntityState.Modified;
    }
    else
    {
        Context.Employees.Add(employee);
    }

    Context.SaveChanges();
}

Writing a VBA Unit Testing Suite

I am trying to write an add-in that will let me write unit tests for VBA. I have a class set up for testing assertions etc. What I am missing is a way of stubbing/mocking classes. Is there a way of doing that?

For clarification, I am talking about instantiating objects that pretend to be of a different class. For example, I want to write a mock of the Address class that the Person class will work with:

Person Class

 Private Home As Address

 Sub ChangeAddress(NewStreetAddress As String)
      Home.Street = NewStreetAddress
 End Sub

Address Class

Public Street As String

Getting the function name (__FUNCTION__) from a class name and a pointer to member function

I am writing a unit test library and I need to log the name of the test function during the assertion, as follows:

struct my_test_case : public unit_test::test {
    void some_test()
    {
        assert_test(false, "test failed.");
    }
};

When I run the test case, I want to produce an output like:

ASSERTION FAILED (&my_test_case::some_test()): test failed.

I know there are some ways to solve this issue:

  1. Give __FUNCTION__ to assert_true()

  2. Define a macro like ASSERT(a, b) that expands to assert_true(a, b, __FUNCTION__)

  3. Define a macro like TEST to cache the __FUNCTION__ in the test function:

    struct my_test_case : public unit_test::test { void some_test() { TEST assert_test(false, "test failed."); } };

But these are error-prone and ugly solutions. Are there any other solutions to this problem?

Unit test rails associations with mocks

I have a class that is responsible for doing some processing:

class Processor < ActiveRecord::Base
  def process
    # do some stuff
  end
end

and another class that is responsible for executing every processor and concatenating the results. It's important to execute the processors in the correct order:

class ProcessorSet < ActiveRecord::Base
  has_many :processors, -> { order(:priority) }

  def process
    processors.map do |processor|
      processor.process
    end.join
  end      
end

How do I test the ProcessorSet.process method without a dependency on Processor? (I don't want to test the Processor.process method, because it's already tested in another test unit.) I've tried to mock the processors method using mocha:

class ProcessorSetTest < ActiveSupport::TestCase
  test 'concatenates processor results in order' do
    processor_set = ProcessorSet.new

    processors = ['result1', 'result2'].map do |result|
      object = mock()
      object.expects(:process).returns(result)
      object
    end

    processor_set.expects(:processors).returns(processors)

    assert_equal 'result1result2', processor_set.process
  end
end

But in such a case, how do I test that the processors are returned in the correct order?

How can I generate a spy for an interface with Mockito without implementing a stub class?

So I have the following interface:

public interface IFragmentOrchestrator {
    void replaceFragment(Fragment newFragment, AppAddress address);
}

How can I create a spy with mockito that allows me to hook ArgumentCaptor-objects to calls to replaceFragment()?

I tried

    IFragmentOrchestrator orchestrator = spy(mock(IFragmentOrchestrator.class));

But mockito complains with "Mockito can only mock visible & non-final classes."

The only solution I've come up with so far is to implement an actual mock of the interface before I create the spy. But that kind of defeats the purpose of a mocking framework:

public static class EmptyFragmentOrchestrator implements IFragmentOrchestrator {
    @Override
    public void replaceFragment(Fragment newFragment, AppAddress address) {

    }
}

public IFragmentOrchestrator getSpyObject() {
    return spy(new EmptyFragmentOrchestrator());
}

Am I missing something fundamental? I've been looking through the docs without finding anything (but I may be blind).

property-based-test for simple object validation

consider this simple example:

  • There's a Person object
  • It must have at least one of FirstName and LastName (both are allowed, but one is mandatory)
  • It must have a valid Age (integer, between 0 and 150)

How would you property-based-test this simple case?
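(As one illustration of the idea, a stdlib-only sketch rather than a real property-based framework such as Hypothesis or FsCheck; the validator and its rules restate the bullets above, and all names are made up.)

```python
import random
import string

def is_valid_person(first_name, last_name, age):
    """Rules from the question: at least one non-empty name, and an
    integer age between 0 and 150 inclusive."""
    has_name = bool(first_name) or bool(last_name)
    valid_age = isinstance(age, int) and 0 <= age <= 150
    return has_name and valid_age

def random_name(min_len=0):
    return ''.join(random.choice(string.ascii_letters)
                   for _ in range(random.randint(min_len, 10)))

# Properties: valid inputs always pass; dropping both names or pushing
# the age out of range always fails, whatever the other fields are.
for _ in range(200):
    first, last = random_name(), random_name(min_len=1)
    age = random.randint(0, 150)
    assert is_valid_person(first, last, age)
    assert not is_valid_person('', '', age)
    assert not is_valid_person(first, last, random.randint(151, 1000))
```

A property-based framework adds the important extras this sketch lacks: automatic input generation from declared strategies and shrinking of failing cases down to a minimal example.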

Make sure function cannot be called during tests

First of all: Feel free to tell me that this is an antipattern!

In my code, I have some functions responsible for calling external API's. This is a prime candidate for mocking in the tests to make sure that the external API is not hit when tests are run.

The thing is, the way mocking works in python (at least the way I have been taught), we mock a position in the imported module structure explicitly, e.g.

import mymodule

def test_api():
    mocker.patch('mymodule.mysubmodule.json_apis.my_api_wrapper_function')
    [...]

This will mock out the my_api_wrapper_function function for the test. However, what if refactoring moves or renames the function? If the test is not updated, it will most likely still pass AND the external API will be hit, because the new location of the function has not been mocked.

I see two solutions to this problem, but I am not sure how to implement either of them:

  • Mock stuff in a better way, so that I am sure not to have problems when refactoring
  • Create a decorator, which will wrap a function and raise an exception if the function is called in a test context (I suppose this depends on the test runner that is used? In my case, it is pytest)
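(On the first bullet, one hedged option is `mock.patch.object`, which takes the already-imported module object plus an attribute name and, with the default `create=False`, raises if that attribute no longer exists, so a moved or renamed function fails the test instead of silently hitting the real API. `json.dumps` below is just a stand-in target for the question's `my_api_wrapper_function`.)

```python
import json
from unittest import mock

# Patching via the module object: the reference to `json` is checked at
# import time, and patch.object raises AttributeError if 'dumps' were
# ever moved or renamed.
with mock.patch.object(json, 'dumps', return_value='{}') as fake:
    assert json.dumps({'a': 1}) == '{}'   # the fake answered
    fake.assert_called_once()

# A stale target name fails loudly rather than passing silently:
try:
    with mock.patch.object(json, 'no_such_function'):
        pass
except AttributeError:
    pass  # the test errors out instead of hitting the real API
```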

How to prevent database effect in C# MVC Unit testing?

When I apply unit testing to the Insert, Update and Delete operations, the records are actually inserted, updated and deleted in the database as well.

How is this possible? Can anyone suggest a solution to prevent the tests from affecting the database?

Thanks,

Python - How to test code which uses an object and its methods using py.test?

I am trying to unit-test the following code with py.test:

def get_service_data():    
    client = requests.session()
    # some task on the client object

    resp = client.get('http://ift.tt/1VmBZCg', params={...})
    temp_json = resp.json()

    result = []
    # lots of processing on temp_json
    return result

However, if I monkeypatch requests.session(), the mocked object will not have the get() method. If I patch requests.session.get(), the test outputs this error message:

>       monkeypatch.setattr('requests.session.get', lambda: {})
E       Failed: object <function session at 0x1046e3500> has no attribute 'get'

How should I test the above code using py.test?
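(One hedged way through, sketched with a stand-in: replace the session *factory* with a fake client object, instead of patching an attribute on the `requests.session` function itself, which has no `get`. The `SimpleNamespace` below stands in for the real `requests` module; in an actual py.test run, `monkeypatch.setattr(requests, 'session', lambda: FakeClient())` would play the same role.)

```python
from types import SimpleNamespace

class FakeResponse:
    def __init__(self, payload):
        self._payload = payload

    def json(self):
        return self._payload

class FakeClient:
    """Quacks like a requests session for the calls the code under test makes."""
    def get(self, url, params=None):
        return FakeResponse({'result': 'ok'})

# Stand-in for the imported requests module, with session() swapped out.
requests = SimpleNamespace(session=lambda: FakeClient())

def get_service_data():
    client = requests.session()
    resp = client.get('http://example.com/api', params={})
    return resp.json()

assert get_service_data() == {'result': 'ok'}
```

The underlying issue in the error message is that `requests.session` is a plain function returning a `Session` instance; the `get` method lives on the `requests.Session` class, so patching that class method is the other common route.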

Using TestNG as external interface to test web services

I'm new to TestNG and want to create a test suite with it that can test any web service I want, so that I don't have to create the test suite again and again for other web services. Can someone help me with this? I'd appreciate any examples.

JOOQ MockDataProvider - how to return different mocks depending on some conditions?

Recently I was implementing a unit test using the jOOQ MockDataProvider. When I wanted to use my mock provider in a DAO with many selects, I had to use many if-else statements. According to http://ift.tt/1O7oFNU, I just need to check whether my SQL starts with some query. The SQLs used in my DAO could start in 3 different ways, so I used a pretty complex if-else construct.

Second thing: I wanted my MockDataProvider to return a mock result only when the SQL is executed for the first time, and then return no results - in the DAO I iterate in a loop 5 times, and each time the DAO should check something in the database. I had no idea how to mock such behaviour, so I used a simple counter - but it looks awful, and I want it implemented in a good way. Here is my code:

public class SomeProvider implements MockDataProvider {

    private static final String STATEMENT_NOT_SUPPORTED_ = "Statement not supported: ";
    private static final String SELECT_META = "select \"myschema\".\"meta\".";
    private static final String SELECT_CLIENT = "select \"myschema\".\"client\".";
    private static final String SELECT_KEY = "select \"myschema\".\"key\".";
    private static final String TEST_SECRET_KEY = "some key";
    private static final String KEY = "40sld";
    private static final String DROP = "DROP";
    private static final String SOME_URL = "something";
    private static final String MONKEY = "monkey";
    private static final int FIRST_ITERATION_COUNTER_VALUE = 0;
    private final Long keyId;
    int counter = 0;

    public SomeProvider(Long keyId) {
        this.keyId = keyId;
    }

    @Override
    public MockResult[] execute(MockExecuteContext ctx) throws SQLException {

        DSLContext create = DSL.using(SQLDialect.POSTGRES);
        MockResult[] mock = new MockResult[3];
        String sql = ctx.sql();
        if (sql.toUpperCase().startsWith(DROP)) {
            throw new SQLException(STATEMENT_NOT_SUPPORTED_ + sql);
        } else if (sql.startsWith(SELECT_CLIENT)) {

            Result<ClientRecord> result = create.newResult(CLIENT);
            result.add(create.newRecord(CLIENT));
            result.get(0).setValue(CLIENT.ID, 1L);
            result.get(0).setValue(CLIENT.SECRET_KEY, TEST_SECRET_KEY);
            mock[0] = new MockResult(1, result);

        } else if (sql.startsWith(SELECT_META)) {

            Result<MetaRecord> metaResult = create.newResult(META);
            metaResult.add(create.newRecord(META));
            metaResult.get(0).setValue(META.ID, 1L);

            metaResult.get(0).setValue(META.URL, SOME_URL);
            metaResult.get(0).setValue(META.KEY, KEY);
            metaResult.get(0).setValue(META.OPTION, keyId);
            mock[0] = new MockResult(1, metaResult);

        } else if (sql.startsWith(SELECT_KEY)) {

            Result<KeyRecord> keyResult = create.newResult(KEY);
            if (counter == FIRST_ITERATION_COUNTER_VALUE) {
                // first SELECT returns monkey, the rest return no results
                keyResult.add(create.newRecord(KEY));
                keyResult.get(0).setValue(KEY.ID, 1L);
                keyResult.get(0).setValue(KEY.VALUE, MONKEY);
                mock[0] = new MockResult(1, keyResult);
            } else {
                mock[0] = new MockResult(0, keyResult);
            }
            counter++;
        }

        return mock;
    }
}

It works but looks bad. To sum up, my question is: how do I return (using one provider) different results depending on the query and on how many times the query has been executed? Maybe this class is only meant for simple DSLContext mocking, not for mocking a whole DAO which runs many queries many times with one DSLContext.

Code generated by Lombok and Jacoco unit test coverage

I am using Lombok in several of our company projects and we are also running analysis using Sonarqube with Jacoco plugin to test the code quality and unit test coverage.

The problem is, that Sonarqube complains about low coverage in all of the classes using Lombok @Data annotation since the code generated by Lombok is not tested.

There is no point in testing the generated code (like equals, hashCode, etc.), and I don't want to delombok the project just for the sake of increasing the coverage.

Is there some easy way how to skip testing the Lombok generated code completely (for example some annotation specifying that the @Data annotation should not be applied)?
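One commonly suggested approach, assuming both tools can be upgraded: newer Lombok versions can annotate everything they generate with @lombok.Generated when the line below is placed in a lombok.config file at the project root, and JaCoCo 0.8.0 and later then excludes those annotated methods from coverage automatically:

```
# lombok.config at the project root
# (requires a recent Lombok and JaCoCo >= 0.8.0)
lombok.addLombokGeneratedAnnotation = true
```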

SpringBoot and DynamoDb-Local Embedded

I have a spring-boot (1.2.6) webapp. I use DynamoDb as an event store for the app. In my integration tests I would like to use this approach to start up DynamoDb-Local from my integration test code.
However, after including the dependency:

<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>DynamoDBLocal</artifactId>
    <version>1.10.5.1</version>
</dependency>

I get the following error when running the integration tests:

java.lang.IllegalStateException: Failed to load ApplicationContext
(....)
Caused by: org.springframework.context.ApplicationContextException: Unable to start embedded container; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'jettyEmbeddedServletContainerFactory' defined in class path resource [org/springframework/boot/autoconfigure/web/EmbeddedServletContainerAutoConfiguration$EmbeddedJetty.class]: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.boot.context.embedded.jetty.JettyEmbeddedServletContainerFactory]: Factory method 'jettyEmbeddedServletContainerFactory' threw exception; nested exception is java.lang.NoClassDefFoundError: org/eclipse/jetty/webapp/WebAppContext
(....)
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'jettyEmbeddedServletContainerFactory' defined in class path resource [org/springframework/boot/autoconfigure/web/EmbeddedServletContainerAutoConfiguration$EmbeddedJetty.class]: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.boot.context.embedded.jetty.JettyEmbeddedServletContainerFactory]: Factory method 'jettyEmbeddedServletContainerFactory' threw exception; nested exception is java.lang.NoClassDefFoundError: org/eclipse/jetty/webapp/WebAppContext
(....)
Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.boot.context.embedded.jetty.JettyEmbeddedServletContainerFactory]: Factory method 'jettyEmbeddedServletContainerFactory' threw exception; nested exception is java.lang.NoClassDefFoundError: org/eclipse/jetty/webapp/WebAppContext
(....)
Caused by: java.lang.NoClassDefFoundError: org/eclipse/jetty/webapp/WebAppContext
(....)
Caused by: java.lang.ClassNotFoundException: org.eclipse.jetty.webapp.WebAppContext

I did not even add any code to my integration test, I literally just added the repo and dependency to my POM (as described in the AWS forum announcement linked above). Without this dependency everything runs just fine. Any ideas?
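A hedged first step, since DynamoDBLocal bundles its own Jetty artifacts: run mvn dependency:tree, confine the dependency to the test scope, and exclude whichever Jetty artifacts conflict with the version Spring Boot's embedded container expects. The exclusion below is illustrative; take the exact artifact ids from your own dependency tree:

```xml
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>DynamoDBLocal</artifactId>
    <version>1.10.5.1</version>
    <scope>test</scope>
    <exclusions>
        <!-- illustrative: exclude the Jetty artifacts that clash -->
        <exclusion>
            <groupId>org.eclipse.jetty</groupId>
            <artifactId>jetty-server</artifactId>
        </exclusion>
    </exclusions>
</dependency>
```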

Xcode 8 Performance Test Measurements

I've written some unit tests to measure the time it takes for my app to create and load view controllers using a measure block. This gives me a result which displays a grey tick, the standard deviation, and whether the result is better or worse than my current baseline.

This information is great, but is there any way to make the test fail if the measureBlock runs a certain percentage worse than the baseline? I'd like to add these tests to my automated test suite, but I haven't found a way to log these results programmatically or cause the tests to fail under certain conditions yet.

When to unit-test

We have built a small custom website using a very popular framework. The project is a small, basic CRUD application that allows users to upload pictures into their private image galleries.

The project is so small that any experienced developer should be very comfortable doing it alone in about 250 hours.

The client was on a tight budget.

  1. Should unit tests always be written during the first phase of development, or is it acceptable to write them later, before we start improving on the project?

  2. Would unit testing while developing the first phase have contributed to the overall cost for the client in phase 1? We're using a combination of Doctrine, Symfony, TWIG, RequireJS, jQuery and other tools.

Basically: the client is complaining about the lack of unit tests, while we feel that we simply postponed the additional cost, to the client's benefit. The client is now making money on the project and can afford to pay for writing the unit tests. He is now trying to get the unit tests for free.

EasyMock: call order on mocks created with @Mock

Is there any way to verify method call order between mocks if they are created with the @Mock annotation?

As described in the documentation, it can be done with a mock control. But EasyMockRule does not expose a control object. I have looked at the EasyMockSupport implementation, but have not found a way to force it to use one control for all injected mocks. :(

public class Test extends EasyMockSupport {

    @Rule
    public EasyMockRule mocks = new EasyMockRule(this);

    @Mock
    private SomeClass first;

    @Mock
    private OtherClass second;

    @TestSubject
    private UnderTest subject = new UnderTest();

    @Test
    public void test() {
        expect(first.call());
        expect(second.call());
        ....
        // Verify that calls were in order first.call(), second.call()
    }
}

How to unit test throwing functions in Swift?

How do I test whether a function in Swift 2.0 throws or not? How do I assert that the correct ErrorType is thrown?

XCTestSuite of XCTestCase

I need to test a UIViewController whose behavior depends on parameters given to it (controls are dynamically instantiated in viewDidLoad based on a webservice call).

I would like to be able to run the same XCTestCase-derived class and inject the testing context. I thought I would use XCTestSuite for that, but this is not the case, as XCTestSuite is a suite of XCTest and not XCTestCase.

Basically I would like to do:

XCTestCaseSuite* suite = [[XCTestCaseSuite alloc] init];
for (Condition* condition in conditions) {
  MyTestCase* testCase = [[MyTestCase alloc] initWithCondition:condition];
  [suite addTestCase:testCase];
}
[suite runTest];

Is there any way to do this? Thanks!

Unit testing a routes function - Node.js

I'm trying to add a unit test for the function that handles the route. My problem is that the test should wait for the asynchronous calls to complete before done is called, but it does not.

I've tried the solution that was suggested in this SO question but it did not work. I'm not sure if it matters but I am using proxyquire to stub out some of the dependencies that make requests to remote hosts and to databases. My code is below:

app.js:

var foo = require('foo');
app.get('/test', foo.test);

foo.js:

var requestPromise = require('request-promise');

module.exports.test = function(req, res) {
    var options = {
        uri: 'http://ift.tt/1P51p2t',
        method: 'GET'
    }

    var promise = requestPromise(options);
    promise.then(function(response) {
        res.status(200);
        res.json(response);
    });

}

My mocha test is below:

describe("foo-test", function () {


    var login = proxyquire("foo",
        {
            "request-promise": function (url) {
                return Q.fcall(function () {
                        return {id: 'dummy_facebook_id'};
                    }
                )
            },
        }
    );

    it("Should check response after asynchronous method completes", function(done) {
        var responseSpy = {
            jsonResponse: {},
            json: function(body) {
                this.jsonResponse = body;
                done();
            },
            status: sinon.spy()

        };

        login.test("blah", responseSpy);

        console.log(responseSpy);
    });
})

I'm expecting the console.log line in the test to be executed after the promise in the code being tested has resolved; unfortunately, that's not the case.

Tuesday 29 September 2015

How to compare two anonymous types or two collection of different types using SemanticComparison

1. Is there a simple way to compare two anonymous types using SemanticComparison from AutoFixture? My current issue is that I can't construct a Likeness for the second anonymous object. Simplified example:

var srcAnon = new { time = expectedTime, data = docsArray };
var resultAnon = new { time = actualTime, data = sutResponseArray };

var expectedAlike = srcAnon.AsSource()
            .OfLikeness<??WhatsHere??>()

2. I think this question is closely related to the first one, as they both use SemanticComparison to create IEquatable implementations. In this question Mark Seemann provided an answer on how to do it using MSTest assertions and the LINQ SequenceEqual method.

Is it possible to use the xUnit 2 assertions library in a similar scenario? xUnit supports Assert.Equal() for collections of the same type; can it be used for collections of different types if the elements implement IEquatable (using Likeness)? Something like this (this doesn't work, as result and allLikeness have different types):

Assert.Equal(allLikeness.ToArray(),result.ToArray());

Unit Test Session doesn't show tests from new test class

  • VS2013 Update 5
  • Created C# MVC Web Application project including a Test Project.
  • Updated all packages...

I moved my Models namespace, all model classes and the ApplicationDbContext into a new class library, separate from the MVC Web App project. I added a reference to the Test Project for this.

The Test Project shows a 'Controllers' folder and in that folder is HomeControllerTest.cs

I wanted to add tests for my Model classes, so I added a 'Models' folder and a very similar cs file for testing my models. The class and methods are public, and the appropriate attributes for the class [TestClass] and methods [TestMethod] are assigned

I added Tests for Insert, Get, Update and Delete for one of my models.

Everything compiles fine.

The Unit Test Sessions window shows no tests from this newly added class. I've restarted VS2013, cleaned the solution, rebuilt... everything except whatever will make it work.

How do I get the newly added tests to be runnable via, or visible to, the Unit Test Sessions window?


How to use a resource file in Visual Studio 2015

I'm creating integration tests (a web class library project) for my ASP.NET 5 MVC 6 application. I want to add a resource file with raw SQL to the test project for DB preparation.

I wanted to use good old resources (.resx), but it's not available in the project's Add New Item menu.

I found this answer pointing to a github repo. I guess this is where he reads the file:

using (var stream = FileProvider.GetFileInfo("compiler/resources/layout.html").CreateReadStream())
using (var streamReader = new StreamReader(stream))
{
    return streamReader.ReadToEnd().Replace("~", $"{basePath}/compiler/resources");
}

I tried using System.IO.File.ReadAllText("compiler/resources/my.sql") in a TestHelper class in my test helper project.

When I used the TestHelper class in my actual test project, it looked for the file in the test project's directory:

TestProject/compiler/resources/my.sql instead of TestHelperProject/compiler/resources/my.sql

I can figure out a couple of workarounds, but I'd like to do it the right way, preferably the way I would with a resx file:

string sql = Resources.MySql;

Any suggestions?

Mocking multiple return values for a function that returns a success/error style promise?

NB: Code reproduced from memory.

I have a method generated by djangoAngular that has this signature in my service:

angular.module('myModule')
.service('PythonDataService',['djangoRMI',function(djangoRMI){
     return {getData:getData};

     function getData(foo,bar,callback){
         var in_data = {'baz':foo,'bing':bar};
         djangoRMI.getPythonData(in_data)
         .success(function(out_data) {
            if(out_data['var1']){
                 callback(out_data['var1']);
             }else if(out_data['var2']){
                 callback(out_data['var2']);
            }
         }).error(function(e){
            console.log(e)
         });    
    };
}])

I want to test my service in Jasmine, and so I have to mock my djangoAngular method. I want to call through and have it return different data on successive calls.

This is (sort of) what I have tried so far, reproduced from memory:

describe('Python Data Service',function(){
    var mockDjangoRMI;
    beforeEach(module('ng.django.rmi'));
    beforeEach(function() {
        mockDjangoRMI = {
            getPythonData:jasmine.createSpy('getPythonData').and.returnValue({
                success:function(fn){fn(mockData);return this.error},
                error:function(fn){fn();return}
            })
        }
        module(function($provide) {
            $provide.value('djangoRMI', mockDjangoRMI);
       });
   });
   it('should get the data',function(){
       mockData = {'var1':'Hello Stackexchange'};
       var callback = jasmine.createSpy();
       PythonDataService.getData(1,2,callback);
       expect(callback).toHaveBeenCalled();
   })
})

But when I put another it block in with a different value for mockData, only one of them is picked up.

I'm guessing that because of the order of operations, something is not right with how I'm assigning mockData. How can I have my mocked djangoRMI function return different data in each test?

Wait for page redirection in Protractor / Webdriver

I have a test that clicks a button and redirects to a user dashboard. When this happens Webdriver returns: javascript error: document unloaded while waiting for result.

To fix this I insert browser.sleep(2000) at the point where redirection occurs, and assuming my CPU usage is low, this solves the issue. However, 2000 ms is arbitrary and slow. Is there something like browser.waitForAngular() that will wait for Angular to load on the redirected page before the expect(..)?

it('should create a new user', () => {
  $signUp.click();
  $email.sendKeys((new Date().getTime()) + '@.com');
  $password.sendKeys('12345');
  $submit.click();

  browser.sleep(2000); // Need alternative to sleep...

  // This doesn't do it...
  // browser.sleep(1);
  // browser.waitForAngular();

  $body.evaluate('user')
  .then((user) => {
    expect(user).toBe(true);
  });
});

Unit Testing Ignore owner from H2 database

I would like to test multiple methods without having restrictions on access to the database. There is legacy code with hard-coded queries, and I was wondering whether H2 can ignore those owner prefixes while testing.

For example: in the code...

..
String q = "SELECT user FROM admin_dba.Empolyees where id < ? and id > 25";
try {
    con = DataSourceUtils.getConnection(dataSource);

            pst = con.prepareStatement(q);
            pst.setLong(1,  id);
            rs = pst.executeQuery();
...

As for testing: am I able to ignore that admin_dba prefix in H2?
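H2 has no switch to silently drop a schema prefix, but the prefix can be made to resolve by creating the schema up front, for example through the INIT clause of the JDBC URL (the schema name below is taken from the question's query; the table definition is a placeholder):

```sql
-- schema.sql, run at connection time via the JDBC URL:
--   jdbc:h2:mem:test;INIT=RUNSCRIPT FROM 'classpath:schema.sql'
-- (a single statement also works inline: ;INIT=CREATE SCHEMA IF NOT EXISTS admin_dba)
CREATE SCHEMA IF NOT EXISTS admin_dba;
-- then create admin_dba.Empolyees with whatever columns the legacy query expects
```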

PHPUnit how to implement Query and DB Datasets?

I have seen many examples/tutorials on how to test the database with PHPUnit using XML, PHP arrays, etc. as datasets. I am struggling to find good examples using Query (SQL) and/or Database (DB) Datasets.

I like this implementation, but it also uses xml.

  • In setUp() it drops the table and then recreates it. I would probably use TRUNCATE instead.

From the PHPUnit docs:

For database assertions you do not only need the file-based datasets but also a Query/SQL based Dataset that contains the actual contents of the database.

Does this mean that it is required to have some sort of file that loads the data in the database?

  • If so, I have seen examples which use a file for each test, or one big file. Would it not make sense to have a file for each test class?

Or is it possible to have the data that the tests expect already in the test database?

Lastly, is the only difference between Query and Database Datasets that the Query Datasets get specific tables and the Database Datasets get all of the tables in the test database?

unit testing a sum in rails model

I want to create a test which tests if my method is working correctly.

class Contest < ActiveRecord::Base
  has_many :submissions 

  def tonnage
    self.submissions.sum(:tonnage)
  end

end

When I test (with minitest) I get the following error:

FAIL["test_0003_must have tonnage", #<Class:0x007fbcb9fac260>, 2015-09-29 21:23:47 +0200]
 test_0003_must have tonnage#Contest (1443554627.17s)
        --- expected
        +++ actual
        @@ -1 +1 @@
        -440
        +#<BigDecimal:7fbcbc9ac150,'0.0',9(27)>
        test/models/contest_test.rb:17:in `block (2 levels) in <top (required)>'

My test looks like this (minitest):

describe Contest do
  let(:contest) { Contest.create(name: "test",
                                 admin_id: 1) }

  it "must have tonnage" do
    contest.tonnage.must_equal 440
  end


end

What does the test failure output mean, and what is the proper way to unit test this? I assume the method in my model is correct, since it is working.

Attempting to use ApacheDS for unit testing a Spring AD Auth Provider

I'm trying to create unit tests for my application to ensure that the authentication provider is returning predictable results. I've verified that it works against the company's AD, but I'd like to make an embedded test too. I saw that Spring Security uses the ApacheDS service and I'm trying to implement it in my code. I created an ldif file that's setting up my defaults, but I'm getting some warnings on some MS-specific attributes.

2015-09-29 14:28:04 WARN  org.apache.directory.server.schema.registries.DefaultOidRegistry,148 - OID for name 'samaccountname' was not found within the OID registry
2015-09-29 14:28:04 WARN  org.apache.directory.server.core.entry.DefaultServerEntry,307 - The attribute 'samaccountname' cannot be stored
2015-09-29 14:28:04 WARN  org.apache.directory.server.schema.registries.DefaultOidRegistry,148 - OID for name 'userprincipalname' was not found within the OID registry
2015-09-29 14:28:04 WARN  org.apache.directory.server.core.entry.DefaultServerEntry,307 - The attribute 'userprincipalname' cannot be stored
2015-09-29 14:28:04 WARN  org.apache.directory.server.schema.registries.DefaultOidRegistry,148 - OID for name 'samaccounttype' was not found within the OID registry
2015-09-29 14:28:04 WARN  org.apache.directory.server.core.entry.DefaultServerEntry,307 - The attribute 'samaccounttype' cannot be stored

I see this is referencing DefaultOidRegistry - is there a default MS schema I can use instead? How can I set up my test data in the ldif to import into the embedded LDAP server?

How are header information and post data set in Rails tests?

I'm running Ruby 1.9.3 and Rails 3.1. I can successfully manually test my usage of the Washout gem by sending myself XML payloads. I am trying to recreate the following bash command in a rails test:

curl -H "Content-Type: text/xml; charset=utf-8" -H "SOAPAction:soap_submit_contract" -d@test/fixtures/soap/success.xml http://localhost/soap/action

As you can see, it sets some header data and sends the data in the file test/fixtures/soap/success.xml

All the other examples I see for POSTs look like this:

post :parse_pdf,
  :terminal_key => @terminal_key,
  :contract_pdf => load_pdf(pdf)

But in my case the file data isn't a named parameter, and this doesn't seem to be setting the header information.

How can I submit a post exactly the same way the curl command does?

Our test suite is using the default ActionController::TestCase as such:

class SoapControllerTest < ActionController::TestCase
  fixtures :terminals

  test "soap_succeeds" do 
    # request.set_header_information ???
    post :action, # file data ???

    assert_match(/success/, @response.body)
  end

end

PowerMock and the class net.sf.ehcache.Cache

I am having trouble mocking the class net.sf.ehcache.Cache using PowerMock. Here is the class I want to test:

package com.services.amazon;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

public class CacheServices {

    /** Local log variable. **/
    private static final Logger LOG = LoggerFactory.getLogger(CacheServices.class);

    private CacheManager cacheManager;    

    public void setCacheManager(CacheManager argCacheManager) {
        cacheManager = argCacheManager;
    }


    public boolean replaceItemInCache(String cacheName, Object cacheKey, Object cacheObject) {

        boolean result = false;

        Cache cache = cacheManager.getCache(cacheName);
        if (cache == null) {            
            return result;
        }

        int initialCacheSize1 = cache.getSize();

        Element retrievedCacheObject = cache.get(cacheKey);
        if (retrievedCacheObject != null) {
            LOG.info("Object exists for the cacheKey of {} - now removing from cache", cacheKey);
            boolean removeFromCacheResult = cache.remove(cacheKey);
        }

        int initialCacheSize2 = cache.getSize();        

        Element newCacheElement = new Element(cacheKey, cacheObject);
        cache.put(newCacheElement);
        int finalCacheSize = cache.getSize();

        Object[] logParams = new Object[]{initialCacheSize1, initialCacheSize2, finalCacheSize};
        LOG.info("initialCacheSize1:{}, initialCacheSize2:{}, finalCacheSize:{}", logParams);

        result = true;
        return result;
    }
}

And my test class

package com.services.amazon;

import static org.junit.Assert.assertTrue;

import org.easymock.EasyMock;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.powermock.api.easymock.PowerMock;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

@RunWith(PowerMockRunner.class)
@PrepareForTest({CacheServices.class, CacheManager.class, Cache.class, Element.class})
public class TestCacheServices {

    CacheServices cacheServices;
    CacheManager mockCacheManager;
    Cache mockCache;

    @Before
    public void setUp() {
        cacheServices = new CacheServices();
        mockCacheManager = PowerMock.createMock(CacheManager.class);
        mockCache = PowerMock.createMock(Cache.class);

        cacheServices.setCacheManager(mockCacheManager);
    }


    @Test
    public void testReplaceItemInCache_SuccessObjectExistsInCache() {    
        String cacheName = "cache";
        String cacheKey = "cache";
        Object cacheObject = new Object();

        Element dummyElement = new Element("Key", "value");

        EasyMock.expect(mockCacheManager.getCache(EasyMock.anyString())).andReturn(mockCache);        
        EasyMock.expect(mockCache.get(EasyMock.anyObject())).andReturn(dummyElement);
        EasyMock.expect(mockCache.getSize()).andReturn(1).atLeastOnce();
        EasyMock.expect(mockCache.remove(EasyMock.anyObject())).andReturn(true);
        mockCache.put(EasyMock.isA(Element.class));
        EasyMock.expectLastCall();

        EasyMock.replay(mockCacheManager);
        PowerMock.replay(mockCache);       

        boolean result = cacheServices.replaceItemInCache(cacheName, cacheKey, cacheObject);
        assertTrue("Expected True but was " + result, result);

        EasyMock.verify(mockCacheManager);
        PowerMock.verify(mockCache);
    }
}

The error console I get from running the test is as follows

java.lang.AssertionError: 
  Unexpected method call Cache.get("cache"):
    Cache.get(<any>): expected: 1, actual: 0
    Cache.remove(<any>): expected: 1, actual: 0
    Cache.put(isA(net.sf.ehcache.Element)): expected: 1, actual: 0
    at org.easymock.internal.MockInvocationHandler.invoke(MockInvocationHandler.java:44)
    at org.powermock.api.easymock.internal.invocationcontrol.EasyMockMethodInvocationControl.invoke(EasyMockMethodInvocationControl.java:91)
    at org.powermock.core.MockGateway.doMethodCall(MockGateway.java:124)
    at org.powermock.core.MockGateway.methodCall(MockGateway.java:185)
    at net.sf.ehcache.Cache.get(Cache.java)
    at com.services.amazon.CacheServices.replaceItemInCache(CacheServices.java:48)
    at com.services.amazon.TestCacheServices.testReplaceItemInCache_SuccessObjectExistsInCache(TestCacheServices.java:53)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.junit.internal.runners.TestMethod.invoke(TestMethod.java:68)
    at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:310)
    at org.junit.internal.runners.MethodRoadie$2.run(MethodRoadie.java:89)
    at org.junit.internal.runners.MethodRoadie.runBeforesThenTestThenAfters(MethodRoadie.java:97)
    at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.executeTest(PowerMockJUnit44RunnerDelegateImpl.java:294)
    at org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTestInSuper(PowerMockJUnit47RunnerDelegateImpl.java:127)
    at org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTest(PowerMockJUnit47RunnerDelegateImpl.java:82)
    at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runBeforesThenTestThenAfters(PowerMockJUnit44RunnerDelegateImpl.java:282)
    at org.junit.internal.runners.MethodRoadie.runTest(MethodRoadie.java:87)
    at org.junit.internal.runners.MethodRoadie.run(MethodRoadie.java:50)
    at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.invokeTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:207)
    at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.runMethods(PowerMockJUnit44RunnerDelegateImpl.java:146)
    at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$1.run(PowerMockJUnit44RunnerDelegateImpl.java:120)
    at org.junit.internal.runners.ClassRoadie.runUnprotected(ClassRoadie.java:34)
    at org.junit.internal.runners.ClassRoadie.runProtected(ClassRoadie.java:44)
    at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.run(PowerMockJUnit44RunnerDelegateImpl.java:122)
    at org.powermock.modules.junit4.common.internal.impl.JUnit4TestSuiteChunkerImpl.run(JUnit4TestSuiteChunkerImpl.java:106)
    at org.powermock.modules.junit4.common.internal.impl.AbstractCommonPowerMockRunner.run(AbstractCommonPowerMockRunner.java:53)
    at org.powermock.modules.junit4.PowerMockRunner.run(PowerMockRunner.java:59)
    at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:86)
    at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:459)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:675)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:382)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:192)

Can anyone point out what I may be doing wrong? I understand the Cache class is final, and I have read the relevant documentation.

Thanks, Damien

Testing for exception fails

When I save a User model, I would like to check if it has a username. Therefore I wrote this pre_save:

@receiver(pre_save, sender=User)
def validate_user(sender, instance, **kwargs):
    if len(instance.username) <= 5: raise Exception("Username too short")

Now in my testing method I would like to test this exception:

def test_user_no_username(self):
    u = User.objects.create()
    self.assertRaises(Exception, u.save())

The test fails. Why?
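A likely culprit, independent of the signal itself: self.assertRaises(Exception, u.save()) invokes u.save() immediately, before assertRaises can wrap it, and User.objects.create() already saves (so it would raise first). assertRaises needs the callable, not its result. A minimal stdlib-only sketch with a stand-in model (FakeUser is invented for illustration, not the Django User):

```python
import unittest

class FakeUser:
    """Stand-in for the Django model: save() raises on a short username."""
    def __init__(self, username=""):
        self.username = username

    def save(self):
        if len(self.username) <= 5:
            raise Exception("Username too short")

class UserTest(unittest.TestCase):
    def test_short_username_raises(self):
        u = FakeUser()
        # Pass the callable itself; do NOT call it:
        self.assertRaises(Exception, u.save)
        # Equivalent, often clearer, context-manager form:
        with self.assertRaises(Exception):
            u.save()

result = unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(UserTest))
```

With the call form u.save(), the exception is raised while building the argument list, so the test fails before the assertion even runs.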

Google/Analytics.h file not found when Testing

I am including BridgingHeader.h; however, on importing Google/Analytics.h, it can't find the file.

This works fine in the application, it only throws the error under testing.

Not sure what the issue is so any insight would be greatly appreciated.


Python unit test, run some methods only once

I am using a Django TestCase for one test suite like this:

class XXXTests(TestCase):
    def setUp(self):
        ....

    def test_something(self):
        ....

    def test_anthoerthing(self):
        ....

Now I notice that there are a lot of things repeated in test_something() and test_anotherthing() (mostly running some method and getting the returned value).

Is there any way I can run the repeated part only once for the test suite?

Robolectric filenotFound on asset files

Hi, I am using Robolectric version 3.0 for unit testing my app and have an assets folder with some files inside under src/test/assets, but I keep getting FileNotFoundException.

Here is my test code:

```

@RunWith(RobolectricGradleTestRunner.class)
@Config(constants = BuildConfig.class, manifest = Config.NONE)
public class ShowsDatesTest {

    private Context mContext;

    @Test
    public void testResponse() throws IOException {
        BufferedReader br = getBufferedResponseFromFile("response.json");
    }

    private BufferedReader getBufferedResponseFromFile(String json) throws IOException {
        mContext = RuntimeEnvironment.application;
        InputStream jsonResponse = mContext.getAssets().open(json);
        return new BufferedReader(new InputStreamReader(jsonResponse, "UTF-8"));
    }
}

```

Here is my sourceset and dependencies from my gradle file:

```

 production {
            res.srcDir 'build-config/production/res'
            test.java.srcDirs += 'src/main/java'
            test.java.srcDirs += "build/generated/source/r/production"
            test.java.srcDirs += "build/generated/source/buildConfig/production"
            test.assets.srcDir file('src/test/assets')
        }

dependencies {
    compile fileTree(dir: 'libs', include: ['*.jar'])
    compile 'com.android.support:appcompat-v7:19.1.+'
    compile 'com.google.code.gson:gson:2.3'
    testCompile('org.robolectric:robolectric:3.0') {
        exclude group: 'commons-logging', module: 'commons-logging'
        exclude group: 'org.apache.httpcomponents', module: 'httpclient'
    }
    compile 'com.fasterxml.jackson:jackson-parent:2.5'
    compile 'com.squareup:otto:1.3.6'
    compile 'com.jakewharton:butterknife:6.1.0'
    compile 'com.sothree.slidinguppanel:library:3.0.0'
    compile 'com.crashlytics.android:crashlytics:1.+'
    compile 'com.mcxiaoke.volley:library-aar:1.0.0'
    compile 'joda-time:joda-time:2.8.2'
    testCompile('junit:junit:4.12') {
        exclude module: 'hamcrest'
        exclude module: 'hamcrest-core'
    }
    testCompile 'org.hamcrest:hamcrest-all:1.3'
    compile 'com.sothree.slidinguppanel:library:3.0.0'
    compile 'com.squareup:otto:1.3.6'
    compile 'com.squareup.okhttp:okhttp:2.3.0'
    testCompile 'org.apache.maven:maven-ant-tasks:2.1.3'

    compile 'com.google.android.gms:play-services:7.0.0'

    compile 'com.android.support:multidex:1.0.0'

    compile 'com.android.support:recyclerview-v7:21.0.+'
    compile 'com.squareup.picasso:picasso:2.5.2'

}
```

production is a build variant.

Sonarqube python plugin unit test drilldown

I have a Python project analyzed with SonarQube. My project has some unit tests inside, run with nosetests; the SonarQube runner reads the XML generated by nose and shows me the statistics of the tests (number of tests, successes, errors, failures and skips).

When I click on the number of errors, it redirects me to a page with the sources of the tests and shows me the number of tests with errors per file. Here is a picture:

http://ift.tt/1Vl6Q26

Also, if I call the SonarQube REST API to get the unit test results, following this Stack Overflow question: Is it possible to gather unit test list & results on SonarQube 4.5?, I get all the information for each unit test (errors, tracebacks, etc.).

But I can't find this information (the names of the unit tests with errors, what the errors are, tracebacks... some kind of log) inside the SonarQube dashboard. After some testing I found this video: http://ift.tt/1Vl6Q28, so I think SonarQube can do it, but I can't seem to find the way.

I'm using SonarQube 5.1.2 and the Python plugin 1.5.

Thanks for any help you can provide!

AWS device farm with Espresso and JUnit4

I want to test my app in AWS Device Farm, using:

androidTestCompile 'com.android.support.test:runner:0.4'
androidTestCompile 'com.android.support.test:rules:0.4'
androidTestCompile 'com.android.support.test.espresso:espresso-core:2.2.1'
androidTestCompile 'com.android.support.test.espresso:espresso-intents:2.2.1'
androidTestCompile('com.android.support.test.espresso:espresso-contrib:2.2.1') {
    exclude group: 'com.android.support', module: 'appcompat'
    exclude group: 'com.android.support', module: 'support-v4'
    exclude module: 'recyclerview-v7'
}
androidTestCompile 'junit:junit:4.12'
androidTestCompile 'com.squareup.retrofit:retrofit-mock:1.9.0'
androidTestCompile 'com.squareup.assertj:assertj-android:1.1.0'
androidTestCompile 'com.squareup.spoon:spoon-client:1.2.0'

Sample test:

My tests are annotated with @RunWith(AndroidJUnit4.class) and run with AndroidJUnitRunner; they start like:

@RunWith(AndroidJUnit4.class)
@LargeTest
public class EstimationActivityTests {

@Rule
public ActivityTestRule<LoginActivity> mActivityRule = new ActivityTestRule<>(LoginActivity.class);

@Before
public void setup() {
}

@Test
public void showsRightDataOnCreate() {
org.junit.Assert.assertEquals("asd", "asd");
}
}

But it only runs the suite setup and teardown... it looks like it doesn't recognize the tests...

Another thing is that I'm creating the APK and the test APK with gradlew:

#./gradlew assembleMockAndroidTest

and I upload the files app-mock-androidTest-unaligned.apk and app-mock-unaligned.apk.

What's wrong in my process?

Case: http://ift.tt/1FFtJ9q

Inheriting java config classes in unit tests in Spring

I'm writing unit tests for a pure Java-config styled application with a pretty large number of configuration classes. To test some high-level logic I have to import a pack of configs, so the context declaration ends up looking like:

@ContextConfiguration(
    classes = {
            // Common application configurations
            BaseBusinessConfiguration.class, BusinessServicesConfiguration.class, 
            nts.trueip.gatekeeper.logic.configuration.ContextConfiguration.class,
            ControllersConfiguration.class, FactoriesConfiguration.class, CachingConfiguration.class,
            InterpretersConfiguration.class, UtilConfiguration.class, ValidatorsConfiguration.class,
            // Common test environment configurations
            MockedReposConfiguration.class, TestServicesConfiguration.class,
            // Local test configuration
            LogicTestConfiguration.class 
    }
    )

I have to specify them for every test class in the project, and most of them are the same every time; only some specific configurations vary. According to the @ContextConfiguration documentation, it's possible to inherit locations and initializers from a test superclass, but not classes.

Is there any practice for avoiding such bulky configurations, e.g. moving some parts into superclasses or other shared classes?

Share Scala class in test folder with Java tests in Maven

I have a Maven project with mixed Java and Scala code. I want to use an auxiliary class located in the Scala test folder from my Java tests. The file tree is as below, omitting packages:

+ test/
  + java/...
    - SomeTest.java
  + scala/...
    - Aux.scala
    - OtherTest.scala

I want to import code from Aux.scala for use in SomeTest.java. It works fine in my IDE, where all the folders are marked as test folders. However, when building the project with Maven I get an import error from the Java compiler.

How can I configure Maven to use the Scala test code in Java tests?

Assigning IMappingEngine in the constructor causes a mapping exception, but only when running from a unit test

I have a unit test where I do the AutoMapper configuration in SetUp. I then assign IMappingEngine to a private field in the constructor of the class where I actually do the mapping. The unit test fails if I use this field, but using the static method from AutoMapper works fine. Both approaches work fine when running the actual program. The only difference I can see is that the unit tests are in a separate assembly. CLS compliance is turned on.

public class AutomapperConfiguration
{
    public static void Configure()
    {
        Mapper.Initialize(cfg =>
        {
            cfg.AddProfile<AclassMappingProfile>();
        });
    }
    public static void Reset()
    {
        Mapper.Reset();
    }
}
public class AssetModelFactoryTests
{
    [SetUp]
    public void SetUp()
    {
        AutomapperConfiguration.Configure();
    }
    [Test]
    public void TestA()
    {
        var a = new A();
    }
}

public class A
{
    private IMappingEngine _mappingEngine;
    public A()
    {
         _mappingEngine = Mapper.Engine;
    }

    public void DoA()
    {
         Mapper.Map<Destination>(source); //works
         _mappingEngine.Map<Destination>(source); //throws "mapping not supported"
    }
}

How to handle and test flow control in C#, if not with exceptions?

What's the right way to handle and test flow control in methods that are void, if not with exceptions? I've seen that Microsoft does not recommend such a practice, so what's the right way?

This is how I'm currently rejecting parameters that shouldn't be accepted in my method:

    public void RentOutCar(ReservationInfo reservationInfo) 
    {
        try
        {
            if (reservationInfo == null)
            {
                throw new ArgumentNullException("Null Reservation info.");
            }
            if (string.IsNullOrWhiteSpace(reservationInfo.ReservationNumber))
            {
                throw new ArgumentException("Reservation Number is null or empty.");
            }
            if (reservationInfo.Car == null)
            {
                throw new ArgumentNullException("No car registered to rent.");
            }
            if (reservationInfo.RentalDatetime == DateTime.MinValue || reservationInfo.RentalDatetime == DateTime.MaxValue)
            {
                throw new ArgumentException("Rental Date has an unreal value.");
            }
            if (reservationInfo.Car.Mileage <0)
            {
                throw new ArgumentOutOfRangeException("Mileage can't be less than 0.");
            }

            reserverationsRegister.ReservationsDone.Add(reservationInfo);
        }
        catch (Exception) 
        {
            throw;
        }

    }

How to make programmer check certain method after adding column to the DB table

I have a method called export that depends heavily on a DB table schema. And by “depends heavily” I mean that adding a new column to a certain table often (very often) leads to a change in the export method (usually you should add the new field to the export data as well).

My goal is to make the programmer explicitly state whether he forgot to look at the export method or just doesn't want to add the field to the export data.

I have two ideas, but both of them have flaws.

Smart "Read all" wrapper

I can create a smart wrapper that makes sure all data is explicitly read.

Something like this:

def export():
    checker = AllReadChecker(table_row)

    name = checker.get('name')
    surname = checker.get('surname')
    checker.i_dont_need('age') # explicitly ignore the "age" field

    result = [name, surname] # or whatever

    checker.i_am_done() # check all is read

    return result

So the checker asserts if table_row contains any other fields that were not read. But this whole thing looks kind of heavy and (maybe) affects performance.
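The wrapper idea can be sketched as a small, self-contained class. This is only one possible implementation; the AllReadChecker name and its methods come from the pseudocode above, everything else is assumed:

```python
class AllReadChecker:
    """Tracks which fields of a row have been explicitly handled.

    Hypothetical implementation of the wrapper sketched above.
    """

    def __init__(self, row):
        self._row = dict(row)
        self._unread = set(self._row)

    def get(self, field):
        # Reading a field marks it as handled.
        self._unread.discard(field)
        return self._row[field]

    def i_dont_need(self, field):
        # Explicitly ignore a field without reading it.
        self._unread.discard(field)

    def i_am_done(self):
        # Fail loudly if any field was neither read nor ignored.
        if self._unread:
            raise AssertionError(
                "Unhandled fields: %s" % sorted(self._unread))


def export(table_row):
    checker = AllReadChecker(table_row)
    name = checker.get('name')
    surname = checker.get('surname')
    checker.i_dont_need('age')  # explicitly ignore the "age" field
    result = [name, surname]
    checker.i_am_done()  # raises if a new column slipped in unnoticed
    return result
```

With this in place, adding a new column to the row makes i_am_done() raise until someone either reads the field or explicitly ignores it. The overhead is one dict copy and one set per call, which is usually negligible next to the DB access itself.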

“Check that method” unittest

I can just create a unittest that remembers the last table schema and fails every time the table changes. In that case the programmer would see something like “don't forget to check out the export method”. To stop the warning from appearing, the programmer would (or wouldn't, which is one problem) check out export and manually (which is another problem) fix the test by adding the new fields into it.
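The test idea can be sketched like this; get_current_columns() is a hypothetical stand-in for reading the real schema from the DB or the ORM metadata:

```python
import unittest


def get_current_columns():
    # Hypothetical stand-in: in a real project this would read the
    # column list from the DB schema or the ORM model.
    return ['name', 'surname', 'age']


# The columns export() is known to account for. Update this list
# only after reviewing export().
COLUMNS_EXPORT_WAS_REVIEWED_FOR = ['name', 'surname', 'age']


class ExportSchemaGuard(unittest.TestCase):
    def test_export_reviewed_for_current_schema(self):
        self.assertEqual(
            sorted(get_current_columns()),
            sorted(COLUMNS_EXPORT_WAS_REVIEWED_FOR),
            "Table schema changed: review export() and update this list.")
```

The manual-update problem mentioned above remains; the test only turns a silent omission into a failing build.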

P. S.

The above problem is just an example of a wider class of problems I encounter from time to time. I want to bind some pieces of code and/or infrastructure together so that changing one of them immediately alerts the programmer to check another one.

Unit Test Windows.Web.Http HttpClient with mocked IHttpFilter and IHttpContent, MockedHttpFilter throws System.InvalidCastException

I have a class that depends on HttpClient from Windows.Web.Http (Windows 10 UAP app). I want to unit test it, and therefore I need to "mock" the HttpClient to set up what a GET call should return. I started with a "simple" unit test using an HttpClient with a handwritten mocked IHttpFilter and IHttpContent. It's not working as expected, and I get an InvalidCastException in the Test Explorer.

The unit test looks like:

    [TestMethod]
    public async Task TestMockedHttpFilter()
    {
        MockedHttpContent mockedContent = new MockedHttpContent("Content from MockedHttpContent");
        MockedHttpFilter mockedHttpFilter = new MockedHttpFilter(HttpStatusCode.Ok, mockedContent);

        HttpClient httpClient = new HttpClient(mockedHttpFilter);
        var resultContentTask = await httpClient.SendRequestAsync(new HttpRequestMessage(HttpMethod.Get, new Uri("http://dontcare.ch"))).AsTask().ConfigureAwait(false);
        // Test stops here, throwing System.InvalidCastException: Specified cast is not valid

        // Code not reached...
        var result = await resultContentTask.Content.ReadAsStringAsync();
        Assert.AreEqual("Content from MockedHttpContent", result);
    }

I implemented IHttpFilter in MockedHttpFilter:

public class MockedHttpFilter : IHttpFilter
{
    private HttpStatusCode _statusCode;
    private IHttpContent _content;

    public MockedHttpFilter(HttpStatusCode statusCode, IHttpContent content)
    {
        _statusCode = statusCode;
        _content = content;
    }

    public IAsyncOperationWithProgress<HttpResponseMessage, HttpProgress> SendRequestAsync(HttpRequestMessage request)
    {
        return AsyncInfo.Run<HttpResponseMessage, HttpProgress>((token, progress) =>
        Task.Run<HttpResponseMessage>(()=>
        {
            HttpResponseMessage response = new HttpResponseMessage(_statusCode);
            response.Content = _content;
            return response; // Exception thrown after return, but not catched by code/debugger...
        }));
    }
}

I implemented IHttpContent in MockedHttpContent:

public class MockedHttpContent : IHttpContent
{
    private string _contentToReturn;

    public MockedHttpContent(string contentToReturn)
    {
        _contentToReturn = contentToReturn;
    }

    public HttpContentHeaderCollection Headers
    {
        get
        {
            return new HttpContentHeaderCollection();
        }
    }

    public IAsyncOperationWithProgress<string, ulong> ReadAsStringAsync()
    {
        return AsyncInfo.Run<string, ulong>((token, progress) => Task.Run<string>(() =>
        {
            return _contentToReturn;
        }));
    }
}

The error in the Test-Explorer result view:

Test Name:  TestMockedHttpFilter
Test FullName:  xxx.UnitTests.xxxHttpClientUnitTests.TestMockedHttpFilter
Test Source:    xxx.UnitTests\xxxHttpClientUnitTests.cs : line 22
Test Outcome:   Failed
Test Duration:  0:00:00.1990313

Result StackTrace:  
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at System.Runtime.CompilerServices.ConfiguredTaskAwaitable`1.ConfiguredTaskAwaiter.GetResult()
   at xxx.UnitTests.xxxHttpClientUnitTests.<TestMockedHttpFilter>d__1.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
Result Message: Test method xxx.UnitTests.xxxHttpClientUnitTests.TestMockedHttpFilter threw exception: 
System.InvalidCastException: Specified cast is not valid.

First, I'm not sure why the exception is thrown or what I'm doing wrong. Maybe someone can point me in the right direction or give a hint about what to check or try next?

Second, is there a better way to unit test code with an HttpClient dependency (Windows 10 UAP)?

Junit tests for JPA Repository methods

I have the following test method:

@Injected
private ProductRepository productRepository;

@Test
@Transactional
public void shouldChangeProductTitle() {
    Product product = productRepository.getProductById(123L);
    product.setTitle("test_product");
    productRepository.save(product);
    Product updatedProduct = productRepository.getProductById(123L);
    assertNotNull(updatedProduct);
    assertEquals("test_product", updatedProduct.getTitle());
}

My question is: do you see any sense in this test? Does it look good?

The whole method will be executed in one transaction. I am not sure that we will actually write anything to the DB. Also, I don't like the idea with product and updatedProduct... hmm, I feel that something is wrong, but is it really true?

What do you think?

Test - How to check if something was added to the HashMap

I have a method that simply adds something to a HashMap. The code looks like this:

public void map(@Nonnull String string, @Nonnull Collection<SomeCollection> collection) {
    hashmap.put(string, collection);
}

The hashmap is initialised at the top of the class. I want to test whether this works, but I simply do not know how to check that the map contains the expected elements. It would be simple if the method returned something.

For now my test looks like this:

@Test
public void testMap() {
    //given
    Collection<SomeCollection> collection = setUpElementsOfCollection();
    String group = "a";
    String group2 = "b";

    //when
    parser.map(group, collection);

    //then
    verify(parser).map(group, collection);
}   

I would like to make some sort of assertion to check whether map() works correctly. Any ideas or code snippets would be nice.
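One note: verify(parser).map(group, collection) only checks that a call was recorded on a mock; it never exercises the real method. The usual alternative is a state-based test: call the real object, then assert on the map's contents through an accessor. The idea, sketched in Python for brevity (the class and method names mirror the question; the accessor is an assumed addition):

```python
class Parser:
    """Minimal stand-in for the class under test."""

    def __init__(self):
        self._groups = {}

    def map(self, key, collection):
        # Same behaviour as in the question: store the collection
        # under the given key.
        self._groups[key] = collection

    def get(self, key):
        # A small accessor (hypothetical) so tests can observe state.
        return self._groups.get(key)


# State-based test: call the real method, then assert on the contents.
parser = Parser()
users = ['alice', 'bob']
parser.map('a', users)
assert parser.get('a') == ['alice', 'bob']
assert parser.get('b') is None
```

In Java the equivalent would be a getter (or a method like containsGroup) plus assertEquals on the returned collection.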

Testing an AngularJS directive's inline controller method using Jasmine

I'm facing difficulty covering the inline controller function defined in a directive using Jasmine and Karma. Below is the code of that directive.

myApp.directive("list", function() {
    return {
        restrict: "E",
        scope: {
            items: "=",
            click: "&onClick"
        },
        templateUrl: "directives/list.html",
        controller: function($scope, $routeParams) {
            $scope.selectItem = function(item) {
                $scope.click({
                    item: item
                });
            }
        }
    }     
});

Unit Test case is below:

describe('Directive:List', function () {

    var element, $scope, template, controller;
    beforeEach(inject(function ($rootScope, $injector, $compile, $templateCache, $routeParams) {
        $httpBackend = $injector.get('$httpBackend');
        $httpBackend.when("GET", "directives/list.html").respond({});
        $scope = $rootScope.$new();
        element = $compile('<list></list>')($scope);
        template = $templateCache.get('directives/list.html');
        $scope.$digest();
        controller = element.controller($scope, $routeParams);
    }));

    it('Test Case-1: should contain the template', function () {
        expect(element.html()).toMatch(template);
    });
}); 

In my code coverage report, the controller function is not covered. Also I am not able to get the controller instance and test the selectItem method.

Any idea would be of great help!

Specflow - Is there a way to manage a background so that it only runs for certain scenarios in a feature?

I have a Specflow .feature file containing a number of scenarios.

The majority of the scenarios within the feature file use a background. However, one scenario does not require this background.

How can I stop the background from running for this specific scenario without having to move it to a separate feature?

Mocking SessionContex using mockito causes a ClassNotFoundException

I am starting with unit tests. I've made a change to a class in which I now inject the SessionContext so I can make a lookup when needed.

@Resource
private SessionContext ctx;

Now, in my test, I would like to inject it so I can mock the lookup method:

@Mock
private SessionContext ctx;

But when I run the test, I get:

java.lang.NoClassDefFoundError: javax/xml/rpc/handler/MessageContext
    at java.lang.Class.getDeclaredMethods0(Native Method)
    at java.lang.Class.privateGetDeclaredMethods(Class.java:2615)
    at java.lang.Class.getDeclaredMethods(Class.java:1860)
    at org.mockito.cglib.core.ReflectUtils.addAllMethods(ReflectUtils.java:349)
    at org.mockito.cglib.proxy.Enhancer.getMethods(Enhancer.java:427)
    at org.mockito.cglib.proxy.Enhancer.generateClass(Enhancer.java:457)
    at org.mockito.cglib.core.DefaultGeneratorStrategy.generate(DefaultGeneratorStrategy.java:25)
    at org.mockito.cglib.core.AbstractClassGenerator.create(AbstractClassGenerator.java:217)
    at org.mockito.cglib.proxy.Enhancer.createHelper(Enhancer.java:378)
    at org.mockito.cglib.proxy.Enhancer.createClass(Enhancer.java:318)
    at org.mockito.internal.creation.jmock.ClassImposterizer.createProxyClass(ClassImposterizer.java:110)
    at org.mockito.internal.creation.jmock.ClassImposterizer.imposterise(ClassImposterizer.java:62)
    at org.powermock.api.mockito.internal.mockcreation.MockCreator.createMethodInvocationControl(MockCreator.java:111)
    at org.powermock.api.mockito.internal.mockcreation.MockCreator.mock(MockCreator.java:60)
    at org.powermock.api.mockito.PowerMockito.mock(PowerMockito.java:143)
    at org.powermock.api.extension.listener.AnnotationEnabler.standardInject(AnnotationEnabler.java:84)
    at org.powermock.api.extension.listener.AnnotationEnabler.beforeTestMethod(AnnotationEnabler.java:51)
    at org.powermock.tests.utils.impl.PowerMockTestNotifierImpl.notifyBeforeTestMethod(PowerMockTestNotifierImpl.java:90)
    at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.executeTest(PowerMockJUnit44RunnerDelegateImpl.java:292)
    at org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTestInSuper(PowerMockJUnit47RunnerDelegateImpl.java:127)
    at org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTest(PowerMockJUnit47RunnerDelegateImpl.java:82)
    at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runBeforesThenTestThenAfters(PowerMockJUnit44RunnerDelegateImpl.java:282)
    at org.junit.internal.runners.MethodRoadie.runTest(MethodRoadie.java:86)
    at org.junit.internal.runners.MethodRoadie.run(MethodRoadie.java:49)
    at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.invokeTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:207)
    at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.runMethods(PowerMockJUnit44RunnerDelegateImpl.java:146)
    at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$1.run(PowerMockJUnit44RunnerDelegateImpl.java:120)
    at org.junit.internal.runners.ClassRoadie.runUnprotected(ClassRoadie.java:33)
    at org.junit.internal.runners.ClassRoadie.runProtected(ClassRoadie.java:45)
    at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.run(PowerMockJUnit44RunnerDelegateImpl.java:118)
    at org.powermock.modules.junit4.common.internal.impl.JUnit4TestSuiteChunkerImpl.run(JUnit4TestSuiteChunkerImpl.java:104)
    at org.powermock.modules.junit4.common.internal.impl.AbstractCommonPowerMockRunner.run(AbstractCommonPowerMockRunner.java:53)
    at org.powermock.modules.junit4.PowerMockRunner.run(PowerMockRunner.java:53)
    at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:86)
    at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:459)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:675)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:382)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:192)
Caused by: java.lang.ClassNotFoundException: javax.xml.rpc.handler.MessageContext
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
    ... 39 more

I find it strange, because I have all the required dependencies (this code works in the real application).

How do I mock and inject the SessionContext using Mockito? (I can't change the mocking framework.)

How can I test the below websocket server method when there are internal dependencies?

I have web socket server code. How can I test it using JUnit? Below is the implementation of the onOpen method. How can I test this method to see that each line executed correctly? I can create a client and see if the connection was successful, but how can I check that the lines inside the method executed as intended? There is also an internal dependency on the messageBroker object. How can I test the method without removing that line? I am new to unit testing. Please advise.

if (ws.isOpen()) {
        System.out.println("WebSocketServer->OnOpen : OnOpen method called by client");
        MessageBroker messageBroker = new MessageBroker(messageBus);
        messageBroker.SetWebClient(ws);
        sessionIdMap.put(ws, messageBroker);
    }
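A common way to make such a method testable is to inject the broker (or a factory for it) rather than constructing it inline, then pass a hand-rolled fake in the test. The shape of the pattern, sketched in Python for brevity; all names here just mirror the question and are otherwise assumptions:

```python
class FakeConnection:
    """Stands in for the websocket connection (ws) in the question."""

    def __init__(self, is_open=True):
        self._is_open = is_open

    def isOpen(self):
        return self._is_open


class FakeBroker:
    """Hand-rolled fake for MessageBroker: just records its client."""

    def __init__(self):
        self.client = None

    def set_web_client(self, ws):
        self.client = ws


class Server:
    def __init__(self, broker_factory):
        # The broker is created through an injected factory, so a test
        # can substitute a fake without changing on_open itself.
        self._broker_factory = broker_factory
        self.session_id_map = {}

    def on_open(self, ws):
        if ws.isOpen():
            broker = self._broker_factory()
            broker.set_web_client(ws)
            self.session_id_map[ws] = broker


# Test: inject a factory returning a fake broker, then assert that
# on_open wired everything together.
fake = FakeBroker()
server = Server(lambda: fake)
ws = FakeConnection()
server.on_open(ws)
assert server.session_id_map[ws] is fake
assert fake.client is ws
```

The production code would pass a factory that builds a real MessageBroker from the message bus; the test passes one that returns a fake, so every line of onOpen can be asserted without a real client.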

E2E testing: Karma + jQuery vs Protractor

Question

In terms of e2e testing, what can't we do with Karma and jQuery that is possible with Protractor?

Explanation

I'm currently building a testing framework for my JavaScript application. I'm using Karma for Unit Testing and Protractor for E2E Testing as suggested by many people.

I'm aware of the conceptual differences between unit testing and e2e testing but, in the context of JavaScript, I don't clearly understand why we need a framework like Protractor.

As far as I know, the point of e2e testing is to use the application as a simple end-user would. For that, tools like Protractor use a WebDriver to interact with a browser and let us simulate user events (clicking on elements, filling forms...).

The thing is, why can't we simply do this using Karma and jQuery?

Indeed, jQuery comes with lots of methods for interacting with DOM elements (triggering events, getting/setting an element's properties, setting the value of an input...). Furthermore, it provides selectors which make selecting DOM elements very easy.

From my point of view, Karma and jQuery have (almost) everything needed for e2e testing (leaving aside the browser window operations available in Protractor, which allow, for example, setting the window size or position).

I'm obviously missing something, any clarification will be helpful.

How to perform Forms Authentication testing in C# MVC 5.0

I have tried a lot and searched around. I found very few links about this, but I have not succeeded in solving the problem.

Without generating any dummy forms authentication, I want to perform forms-authentication testing in a unit-test (C# MVC) project.

I have used interface methods like below:

    private readonly IAuthenticationProvider _authenticationProvider

    public AccountController(IAuthenticationProvider authenticationProvider)
    {
        _authenticationProvider = authenticationProvider;
    }

    public interface IAuthenticationProvider
    {
        void SignOut();
        void SetAuthCookie(string username, bool remember);
        void RedirectFromLoginPage(string username, bool createPersistentCookie);
    }


    public class FormsAuthWrapper : IAuthenticationProvider
    {
        public void SignOut()
        {
            FormsAuthentication.SignOut();
        }
        public void SetAuthCookie(string username, bool remember)
        {
            FormsAuthentication.SetAuthCookie(username, remember);
        }
        public void RedirectFromLoginPage(string username, bool createPersistentCookie)
        {
            FormsAuthentication.RedirectFromLoginPage(username, createPersistentCookie);
        }
    }

For login

   public ActionResult Login(LoginViewModel model, string returnurl)
   {
      _authenticationProvider.SetAuthCookie(model.username, false);
   }

Is there any other way to initialize the forms-authentication object? I am still getting the error: "Object reference not set to an instance of an object",

because _authenticationProvider is null and forms authentication is not initialized.

It says to use the new keyword to create an object instance.

The stack trace is the following:

   at System.Web.Security.CookielessHelperClass.UseCookieless(HttpContext context, Boolean doRedirect, HttpCookieMode cookieMode)
   at System.Web.Security.FormsAuthentication.SetAuthCookie(String userName, Boolean createPersistentCookie, String strCookiePath)
   at System.Web.Security.FormsAuthentication.SetAuthCookie(String userName, Boolean createPersistentCookie)
   at Test.Controllers.AccountController.Login(LoginViewModel model, String returnurl) in d:\projects\Test\Test\Controllers\AccountController.cs:line 81
   at Test.Tests.Controllers.CIControllerTest.TestSetup() in d:\projects\Test\Test.Tests\Controllers\CIControllerTest.       cs:line 47

Getting Unknown provider in my unit test

I get the following error when trying to unit test my Angular service:

Error: [$injector:unpr] Unknown provider: dbTranslateProvider.

I use Karma and Jasmine for testing, and my code is bundled with WebPack. I'm new to Angular and Angular unit testing.

My unit test looks like this (I'm not really testing anything yet):

describe("dbTranslate", function () {

    beforeEach(function () {
        var module = angular.module('ebcore');
    });

    beforeEach(inject(function ($injector) {

        var sut = $injector.get('dbTranslate');
        var test = $injector.get('$q');
    }));

    it("dbTranslate.translate", function () {
        $cacheFactory = function () { };
        $cacheFactory.get = function () { };
        //$dbTranslateCache = dbTranslate.dbTranslateCache($cacheFactory);
    });
});

The service I'm trying to test looks like this (TypeScript):

export class dbTranslate {
    static $inject = ['$q', '$http', 'dbTranslateCache'];

    constructor(private $q: ng.IQService, private $http: ng.IHttpService, private dbTranslateCache: ng.ICacheObject) {

    }

    ensure(texts: string[]) {
        var deferred = this.$q.defer();
        var missing = [];
        var values = {};

        texts.forEach(text => {
            var value = this.dbTranslateCache.get(text);
            if (typeof value === 'undefined') {
                missing.push(text);
            } else {
                values[text] = value;
            }
        });

        if (missing.length == 0) {
            deferred.resolve(values);
        } else {
            this.$http.post("/pub/texts/fetchText", missing).then(result => {
                for (var prop in result.data) {
                    if (result.data.hasOwnProperty(prop)) {
                        this.dbTranslateCache.put(prop, result.data[prop]);
                        values[prop] = result.data[prop];
                    }
                }
                deferred.resolve(values);
            }, error => {
                deferred.reject();
            });
        }

        return deferred.promise;
    }

    translate(text: string): string {
        return this.dbTranslateCache.get(text);
    }
}

If I try and debug my unit test I can see that the module._invokeQueue contains my service.

So, any idea why I can't find my service through $injector? What am I missing?

How to create JUnit test cases for a method that uses the DB

I have never written JUnit test cases. I looked at how to do it, but I've only done basic test cases with assertEquals(). I do not know what to do for this method:

public class Apc7Engine extends BaseEngine {

/**
 * This method retrieve plannings 
 * in APC7 configuration
 * 
 * It is an implementation of an abstract method
 * from BaseEngine.java
 *
 */
@Override
public void retrievePlannings() {
    LogCvaultImport.code(200).debug("A7: start retrievePlannings");
    try {
        List importList = DummyApc7DAOFactory.getDAO().getDummyApc7();
        Iterator poIterator = importList.iterator();

        while(poIterator.hasNext()) {
             DummyApc7 dummy = (DummyApc7) poIterator.next();
             PlanningObject planning = new PlanningObject();
             planning.setAchievedDate(dummy.getLastUpdate());
             planning.setAircraftType(dummy.getAcType());
             planning.setBaselineDate(dummy.getLastUpdate());
             planning.setDeliverySite(dummy.getDeliverySite());
             planning.setEventId(dummy.getEvtId());
             planning.setEventName(dummy.getEvent());
             planning.setEventStatus(dummy.getEvtStatus());
             planning.setLastUpdate(dummy.getLastUpdate());
             planning.setModel(dummy.getModel());
             planning.setMsn(dummy.getMsn());
             planning.setOperator(dummy.getOperator());
             planning.setOwner(dummy.getOwner());
             planning.setProgram(dummy.getProg());
             planning.setSerial(dummy.getSerial());
             planning.setTargetDate(dummy.getLastUpdate());
             planning.setVersion(dummy.getVersion());
             planning.setVersionRank(dummy.getVersionRank());
             LogCvaultImport.code(800).info("A7|Event name: "+planning.getEventName()+" - MSN: "+planning.getMsn()+" - Delivery site: "+planning.getDeliverySite());
             listPlanningObject.add(planning);        
        }
    } catch (DAOException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }

    LogCvaultImport.code(1000).debug("A7: end retrievePlannings");
}

}

I retrieve objects from the DB, then fill a List with PlanningObject instances built from the DB data. I have no idea how to write JUnit test cases for this. I have heard about mocks?
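Yes, mocking is the usual answer here: replace whatever DummyApc7DAOFactory.getDAO() returns with a stub that yields canned objects, so the test exercises only the mapping loop and never touches the DB. The shape of such a test, sketched with Python's unittest.mock for brevity (in Java you would typically inject the DAO and stub it with Mockito; all names below are simplified stand-ins for those in the question):

```python
from unittest import mock


class Engine:
    """Simplified stand-in for Apc7Engine."""

    def __init__(self, dao):
        # The DAO is injected so a test can pass a stub instead of
        # the real DB-backed implementation.
        self._dao = dao
        self.plannings = []

    def retrieve_plannings(self):
        # Same shape as the mapping loop in the question, reduced
        # to two fields for the sketch.
        for dummy in self._dao.get_dummy_apc7():
            self.plannings.append({
                'event': dummy.event,
                'msn': dummy.msn,
            })


# Stub DAO: returns canned rows instead of querying the DB.
row = mock.Mock(event='DELIVERY', msn='1234')
dao = mock.Mock()
dao.get_dummy_apc7.return_value = [row]

engine = Engine(dao)
engine.retrieve_plannings()

# Assert the mapping logic, with no database involved.
assert engine.plannings == [{'event': 'DELIVERY', 'msn': '1234'}]
dao.get_dummy_apc7.assert_called_once_with()
```

The key enabler is injecting the DAO (e.g. as a constructor parameter) instead of calling the static factory inside the method; that same refactoring is what makes the substitution possible with Mockito in Java.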

Thanks guys !