Wednesday, December 31, 2014

Why does this unit test cause a 'JSON.stringify cannot serialize cyclic structures' exception?

I want to use a unit test to check if an element is visible after a certain action (_toggleWidget function in this case).



it 'should show the bar after toggle', ->
  view = new MyView
  view._toggleWidget

  expect(view.$navBar).to.be.visible


The test failed with this error message: TypeError: JSON.stringify cannot serialize cyclic structures.


Here is the MyView class:



class HeaderView extends Backbone.View

  initialize: ->
    @$el.html template()
    @$navBar = @$ '.nav-bar'

JMockit: Expectations() works in v12 but not in v13/v14 (Java SE 8, TestNG 6.8.13)

With JMockit v12, this test passes (not the real code, but illustrates the issue):



import mockit.Expectations;
import mockit.Mocked;
import org.testng.Assert;
import org.testng.annotations.Test;

public class JmockitExperimentsTest2
{
    public class MyClass
    {
        public int getValue()
        {
            return 5;
        }
    }

    @Mocked
    MyClass myClass;

    @Test
    public void jmockitTest()
    {
        new Expectations()
        {
            {
                myClass.getValue();
                returns(8);
                myClass.getValue();
                returns(4);
            }
        };
        Assert.assertEquals(myClass.getValue(), 8);
        Assert.assertEquals(myClass.getValue(), 4);
    }
}


With JMockit v13 (and v14) I get this assertion failure:



java.lang.AssertionError: expected [8] but found [4]


I get the same assertion failure in v13/v14 if I use "NonStrictExpectations" in place of "Expectations". However, if I change "Expectations" to "StrictExpectations" in v13/v14, there is no assertion failure.


I see from the JMockit change log that changes were made to Expectations in v13, but I don't understand the change-log description well enough to know what to expect. It does seem this change is not backward compatible.


It's very confusing to me why "Strict" works and "NonStrict" doesn't -- I'd expect that any time "Strict" succeeds, "NonStrict" would also succeed.


What am I doing wrong?
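For reference, one workaround that stays non-strict is to record a single expectation with consecutive results instead of recording the same call twice. This is only a sketch, untested against the specific versions above, and it assumes the `returns(Object...)` varargs overload present in recent JMockit releases:

```java
// Record one expectation with consecutive return values: the first matching
// invocation yields 8, the second yields 4 (hypothetical, version-dependent).
new Expectations()
{
    {
        myClass.getValue();
        returns(8, 4);
    }
};

Assert.assertEquals(myClass.getValue(), 8);
Assert.assertEquals(myClass.getValue(), 4);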


Gradle test case

I created a unit test that simply prints a String whose value is "\uFFFD\uFFD0" (some non-ASCII characters). When I execute the test from IntelliJ, the result is the Unicode characters, but when I run it via Gradle I get "??". Investigating deeper, it's not just a display issue; the bytes of the two characters are in fact changed to the ASCII '?' character. Any thoughts? PS: adding the following to build.gradle didn't help:



compileJava {options.encoding = "UTF-8"}
compileTestJava {options.encoding = "UTF-8"}
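For what it's worth, compileJava/compileTestJava encoding only affects how sources are compiled; the test task forks a separate JVM whose default charset is platform-dependent, which would explain the '?' bytes. A possible fix (untested, assuming a standard Gradle Java project) is to force the encoding on the test JVM as well:

```groovy
// build.gradle -- force UTF-8 both at compile time and in the forked test JVM
tasks.withType(JavaCompile) {
    options.encoding = 'UTF-8'
}

test {
    // passed as -Dfile.encoding=UTF-8 on the forked test JVM's command line
    systemProperty 'file.encoding', 'UTF-8'
}
```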

Django testing ajax endpoint in view

I'm using django-ajax in a Django application, and want to do more thorough unit testing of the view that uses it.


My template for a particular view contains the following:



{% block head_js %}
<script type="text/javascript">
  $(function() {
    $('#progressbar').progressbar({
      value: false
    });

    var checkStatus = function() {
      $.ajax({
        type: "POST",
        url: '/ajax/MyApp/check_provisioning.json',
        dataType: 'json',
        success: function(data) {
          if (data.data.complete != true) {
            setTimeout(checkStatus, 3000);
          } else {
            // We've finished provisioning, time to move along.
            window.location.replace('/MyApp/next');
          }
        }
      });
    };

    checkStatus();
  });
</script>
{% endblock %}


In MyApp/endpoints.py I have the function (simplified):



def check_provisioning(request):

    # Do some stuff

    return {'complete': some_boolean}


Now... As far as I can tell, the view works just fine in actual usage. But when making unit tests, Django's test client retrieves the rendered response but doesn't run anything embedded therein.


Does anyone know a way I can unit test that the view and/or the endpoint function are actually doing what they're supposed to? I would rather not fall back on using the django test framework to set up for a selenium test for the one view in the whole project that uses django-ajax this way.
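One option, short of Selenium, is to split the problem: let browser-level tests cover the JavaScript polling, and unit test the endpoint as a plain Python function by handing it a fake request. A minimal sketch (the `check_provisioning` body here is a hypothetical stand-in for the real one in MyApp/endpoints.py, which you would import instead):

```python
import unittest
from unittest.mock import Mock


def check_provisioning(request):
    """Hypothetical stand-in for MyApp.endpoints.check_provisioning."""
    return {'complete': request.session.get('provisioned', False)}


class CheckProvisioningTests(unittest.TestCase):
    def test_reports_complete_when_provisioned(self):
        request = Mock()
        request.session = {'provisioned': True}
        self.assertEqual(check_provisioning(request), {'complete': True})

    def test_reports_incomplete_otherwise(self):
        request = Mock()
        request.session = {}
        self.assertEqual(check_provisioning(request), {'complete': False})


if __name__ == '__main__':
    unittest.main()
```

The URL routing can still be exercised separately with Django's test client (e.g. posting to '/ajax/MyApp/check_provisioning.json'); only the in-page polling loop genuinely needs a browser.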


Passing discriminated unions to InlineData attributes

I am trying to unit test a parser that parses a string and returns the corresponding abstract syntax tree (represented as a discriminated union). I figured it would be pretty compact to use Xunit.Extensions' InlineData attribute to stack all test cases on one another:



[<Theory>]
[<InlineData("1 +1 ", Binary(Literal(Number(1.0)), Add, Literal(Number(1.0))))>]
...
let ``parsed string matches the expected result`` () =


However, the compiler complains that the second argument is not a literal (a compile-time constant, if I understand correctly).


Is there a workaround for this? If not, what would be the most sensible way to structure parser result tests while keeping every case as a separate unit test?


With mock testing, are unit tests + system tests enough?

Before you jump to an answer, let's define what I mean (note that you may have different definitions, and that's part of the problem, but this is what I'm using)



mock testing aka behavior-based testing --- tests that the code does the right thing, i.e., tests verify behavior. All collaborators are mocked.


unit tests --- low-level tests focusing on a small part of the system (like a class). When we use mock testing, collaborators are mocked.


integration tests --- test the interaction of two or more parts of the system (like two classes). The components under test are not mocked.


system tests --- test the system as a "black box" i.e., from the perspective of a user who does not have access to the internals of the system. Real components are used (database, http, etc)



What I'm slowly realizing is that when unit tests are done this way, you may not need integration tests.



  • The behavior-based unit tests should verify that components talk to each other correctly

  • system tests should catch bugs from using real components


Integration tests then become an optional troubleshooting tool when a system test fails (since they're more fine-grained). (However, you might argue that system tests with good logging are enough except for the occasional edge case.)


What am I missing?


Accessing Rails fixtures from test cases

Background


I have an application that pulls in data from various APIs into the local DB and then displays that data to the user. The API updates are scheduled via a rake task that runs every X hours.


My testing involves two parts



  1. Unit tests around pulling the data from the various APIs into the local DB

  2. Controller tests testing that the data in the DB is correctly picked up and manipulated for display by the Controller


For #1 I don't want any fixtures in the DB since the whole point is testing insertion into the DB in the first place. For #2 I do want to use fixtures.




The Issue


I don't want fixtures to be loaded for all tests, just certain sets of tests. By default, Rails has each test require 'test_helper', which in turn (re-)creates fixtures by calling fixtures :all.


To get around that, I created a custom method in test_helper.rb that creates fixtures that I can call when I want.



# test_helper.rb

ENV['RAILS_ENV'] ||= 'test'
require File.expand_path('../../config/environment', __FILE__)
require 'rails/test_help'

class ActiveSupport::TestCase
  # Added by Rails. Comment this out and replace with custom call below
  # fixtures :all

  def load_my_fixtures
    # These are eventually called from each individual test that extends
    # this class (ActiveSupport::TestCase) and inherits this method,
    # so need to refer to methods and accessors explicitly
    ActiveSupport::TestCase.fixtures :all

    # Do other custom stuff
    # ....

    # Test output
    # "one" is just a simple fixture I set up in employees.yml
    puts employees(:one)
  end
end


My employee_test.rb test case is straightforward



require 'test_helper'

class EmployeeTest < ActiveSupport::TestCase
  test "the truth" do
    load_my_fixtures
    assert true
  end
end


As per the Rails documentation, each fixture is dumped into a local variable for easy access. However, the call to employees(:one) above fails with



NoMethodError: undefined method `[]' for nil:NilClass




Questions



  1. The above error goes away if I uncomment the original fixtures :all at the top which loads all fixtures. Why is that?

  2. Is there an easy way to get a list of all fixtures (e.g. :one, :two, :three, etc..)? I don't want to hard-code the retrieval of all Employee fixtures

  3. Is there an easy way to manually clear the test DB? I know that while (re-)loading fixtures ActiveRecord already wipes the DB, but wondering how I can do it manually if needed.


Thanks for the help!
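Regarding question 2, the fixture labels are just the top-level keys of each fixtures YAML file, so they can be enumerated without hard-coding anything. A plain-Ruby sketch (the path is illustrative):

```ruby
require 'yaml'

# Returns the fixture labels (:one, :two, ...) defined in a fixtures file,
# e.g. fixture_labels('test/fixtures/employees.yml')
def fixture_labels(path)
  YAML.load_file(path).keys.map(&:to_sym)
end
```

Inside a test you could then iterate, e.g. `fixture_labels('test/fixtures/employees.yml').each { |label| puts employees(label) }`.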


UI Router Extras breaks my unit tests with unexpected results error?

QUESTION:


- Why are my tests failing when ui-router-extras (not normal ui-router) is installed?


- How can I use ui-router-extras and still have my tests pass?




If you want to install this quickly use yeoman + angular-fullstack-generator + bower install ui-router-extras


I found a similar issue with normal ui-router.



  • Luckily, normal ui-router works just fine with my testing.

  • After installing ui-router-extras I get an ERROR


Error log


If I uninstall ui-router-extras, this test passes just fine.


Here's my test:



'use strict';

describe('Controller: MainCtrl', function () {

  // load the controller's module
  beforeEach(module('morningharwoodApp'));
  beforeEach(module('socketMock'));

  var MainCtrl,
      scope,
      $httpBackend;

  // Initialize the controller and a mock scope
  beforeEach(inject(function (_$httpBackend_, $controller, $rootScope) {
    $httpBackend = _$httpBackend_;
    $httpBackend.expectGET('/api/things')
      .respond(['HTML5 Boilerplate', 'AngularJS', 'Karma', 'Express']);

    scope = $rootScope.$new();
    MainCtrl = $controller('MainCtrl', {
      $scope: scope
    });
  }));

  it('should attach a list of things to the scope', function () {
    $httpBackend.flush();
    expect(scope.someThings.length).toBe(4);
  });
});


Here's my karma.conf:



module.exports = function(config) {
  config.set({
    // base path, that will be used to resolve files and exclude
    basePath: '',

    // testing framework to use (jasmine/mocha/qunit/...)
    frameworks: ['jasmine'],

    // list of files / patterns to load in the browser
    files: [
      'client/bower_components/jquery/dist/jquery.js',
      'client/bower_components/angular/angular.js',
      'client/bower_components/angular-mocks/angular-mocks.js',
      'client/bower_components/angular-resource/angular-resource.js',
      'client/bower_components/angular-cookies/angular-cookies.js',
      'client/bower_components/angular-sanitize/angular-sanitize.js',
      'client/bower_components/lodash/dist/lodash.compat.js',
      'client/bower_components/angular-socket-io/socket.js',
      'client/bower_components/angular-ui-router/release/angular-ui-router.js',
      'client/bower_components/famous-polyfills/classList.js',
      'client/bower_components/famous-polyfills/functionPrototypeBind.js',
      'client/bower_components/famous-polyfills/requestAnimationFrame.js',
      'client/bower_components/famous/dist/famous-global.js',
      'client/bower_components/famous-angular/dist/famous-angular.js',
      'client/app/app.js',
      'client/app/app.coffee',
      'client/app/**/*.js',
      'client/app/**/*.coffee',
      'client/components/**/*.js',
      'client/components/**/*.coffee',
      'client/app/**/*.jade',
      'client/components/**/*.jade',
      'client/app/**/*.html',
      'client/components/**/*.html'
    ],

    preprocessors: {
      '**/*.jade': 'ng-jade2js',
      '**/*.html': 'html2js',
      '**/*.coffee': 'coffee',
    },

    ngHtml2JsPreprocessor: {
      stripPrefix: 'client/'
    },

    ngJade2JsPreprocessor: {
      stripPrefix: 'client/'
    },

    // list of files / patterns to exclude
    exclude: [],

    // web server port
    port: 8080,

    // level of logging
    // possible values: LOG_DISABLE || LOG_ERROR || LOG_WARN || LOG_INFO || LOG_DEBUG
    logLevel: config.LOG_INFO,

    // enable / disable watching file and executing tests whenever any file changes
    autoWatch: false,

    // Start these browsers, currently available:
    // - Chrome
    // - ChromeCanary
    // - Firefox
    // - Opera
    // - Safari (only Mac)
    // - PhantomJS
    // - IE (only Windows)
    browsers: ['PhantomJS'],

    // Continuous Integration mode
    // if true, it capture browsers, run tests and exit
    singleRun: false
  });
};
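One thing worth checking: the karma.conf above loads angular-ui-router but no ui-router-extras bundle, so if app.js declares a dependency on the extras module, it cannot be found under Karma even though the page loads it fine via index.html. A possible fix (the bower path and file name below are assumptions; check your bower_components folder for the actual file):

```javascript
// karma.conf.js -- load the ui-router-extras bundle right after angular-ui-router
files: [
  // ...
  'client/bower_components/angular-ui-router/release/angular-ui-router.js',
  'client/bower_components/ui-router-extras/release/ct-ui-router-extras.js',
  // ...
],
```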

EF 6 Database hit Count

I'm trying to count how many database hits are executed when running a function.


Background: I'm trying to run unit tests on a Web API application to protect against dumb coding errors (like forgetting to add an Include), and currently the unit tests run against a full database, so I want to check how many hits the DB takes per test.


An idea I had was to query Glimpse, but I can't figure out how to instantiate it for unit testing.
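Short of instantiating Glimpse, EF6 itself can report every command it sends through DbContext.Database.Log, and a counter hooked there gives a per-test hit count. A sketch only (untested; MyContext and the line filtering are illustrative, since Database.Log also receives timing and connection lines):

```csharp
// Count EF6 database round-trips by hooking the built-in command logger.
int dbHits = 0;
using (var context = new MyContext())
{
    context.Database.Log = line =>
    {
        // Count only lines that look like SQL commands, not the
        // "-- Executing at..." / "-- Completed in..." diagnostics.
        if (line.StartsWith("SELECT") || line.StartsWith("INSERT") ||
            line.StartsWith("UPDATE") || line.StartsWith("DELETE"))
        {
            dbHits++;
        }
    };

    var result = context.Users.ToList(); // code under test

    Assert.IsTrue(dbHits <= 1, "Expected at most one query, got " + dbHits);
}
```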


C++ class unittests (Qt and not only)

I am pretty new to C++ unit testing and I am in the following situation: I have a class which (of course) has public and private members. I want to test a method from the private/protected section. For tests I use the QtTest package, but that is not a strict requirement. What is the best way to test such a method?


My suggestions are:



1. Subclass it with `Test class` (multiple inheritance)
2. Add friend class (not the best way in my opinion)
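Of the two, the friend approach at least keeps the production class's public interface unchanged. A minimal self-contained sketch of the idea (the class names are illustrative, not from the question; with QtTest the friend would be your QObject test class instead of a plain struct):

```cpp
#include <cassert>

class Widget {
public:
    int visibleValue() const { return secretValue(); }

private:
    // the private method we want to exercise directly from a test
    int secretValue() const { return 42; }

    // grant the test helper access; a declaration only, no runtime cost
    friend struct WidgetTest;
};

struct WidgetTest {
    static int callSecretValue(const Widget& w) { return w.secretValue(); }
};
```

A frequent counter-argument is that needing to test a private method directly often hints the method wants to live in its own class, where it would be public.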

Using RhinoMocks stub to return a value then an exception

I have the following scenario.


I'm using RhinoMocks to mock one of my services. The initial action of a stub is to increment a call count; on a subsequent call I would like to throw an exception... how would I do that?


This is what I currently have, and I'm setting it up in the TestFixtureSetup method:



var mockBLL = MockRepository.GenerateMock<IBLL>();

mockBLL.Stub(x => x.SaveOrUpdateDTO(null, null)).IgnoreArguments().WhenCalled(
    invocation =>
    {
        nSaveOrUpdateCount++;
    });

SimpleIoc.Default.Register<IBLL>(() => mockBLL);


In my test cases, one of my objects will read from the IoC container and then call the "SaveOrUpdateDTO" method. The first test case checks the count, which is correct; the second test case will try to catch an exception.


My initial thought was to create another mock and re-register it before the second test case, but I don't think that's the best way to go about it.


Any thoughts on how to set up two different stub behaviors, one invoking an action and the other throwing an exception?
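For reference, RhinoMocks stubs can be given ordered behaviors with Repeat, so a single mock can count the first call and throw afterwards, avoiding re-registration. A sketch only (untested; based on the standard Repeat.Once()/Throw API):

```csharp
var mockBLL = MockRepository.GenerateMock<IBLL>();

// First call: just count it.
mockBLL.Stub(x => x.SaveOrUpdateDTO(null, null))
       .IgnoreArguments()
       .WhenCalled(invocation => nSaveOrUpdateCount++)
       .Repeat.Once();

// Any subsequent call: blow up.
mockBLL.Stub(x => x.SaveOrUpdateDTO(null, null))
       .IgnoreArguments()
       .Throw(new InvalidOperationException("Save failed on second call"));

SimpleIoc.Default.Register<IBLL>(() => mockBLL);
```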


Is Moq mocking a subinterface return value and ignoring the intermediary step a bug or a feature?

I was recently building an app and a coworker wrote a setup I swore would fail. I was wrong. In it, a factory method was set up with an expected argument of true and would return an integer. Because we didn't mock our configuration, the bool would always be false.


The setup was:



var homeStoreDataServiceFactory = new Mock<IHomeStoreDataServiceFactory>();
homeStoreDataServiceFactory.Setup(service => service.Create(true).GetStoreNumber())
                           .Returns(5);


I thought that a call to factory.Create(false) would not return the mocked object, and thus we would get 0 for the integer instead of the mocked value 5. Instead, no matter what we changed service.Create(X) to, calls to GetStoreNumber always return 5, as if we'd used It.IsAny().


I've built up an MVCE so that you can see what I'm confused about:



using System;
using Moq;

namespace MoqBugMCV
{
    public interface IItemServiceFactory
    {
        IItemService Create(bool shouldCreateServiceA);
    }

    public class Item
    {
        public string Name { get; set; }
        public decimal Price { get; set; }
    }

    public interface IItemService
    {
        Item GetItem();
    }

    public class ItemManager
    {
        private readonly IItemService _itemService;

        public ItemManager(IItemServiceFactory itemServiceFactory)
        {
            _itemService = itemServiceFactory.Create(true); //<==== configured true (like by app.config at runtime or something)
        }

        public Item GetAnItem()
        {
            return _itemService.GetItem();
        }
    }

    internal class Program
    {
        private static void Main(string[] args)
        {
            var itemServiceFactory = new Mock<IItemServiceFactory>();
            var chrisItem = new Item { Name = "Chris's Amazing Item", Price = 1000000 };
            itemServiceFactory.Setup(factory => factory.Create(true).GetItem())
                .Returns(chrisItem);

            var itemManager = new ItemManager(itemServiceFactory.Object);

            var theItem = itemManager.GetAnItem();

            Console.WriteLine("The item is {0} and costs {1}", theItem.Name, theItem.Price);

            var itemServiceFactoryBroken = new Mock<IItemServiceFactory>();
            itemServiceFactoryBroken.Setup(factory => factory.Create(false).GetItem()).Returns(chrisItem); //expecting this to fail, because IItemServiceFactory.Create(true) is configured

            itemManager = new ItemManager(itemServiceFactoryBroken.Object);
            theItem = itemManager.GetAnItem();

            Console.WriteLine("The item is {0} and costs {1}", theItem.Name, theItem.Price); //would expect the item to be null or values to be blank
        }
    }
}


So... is this a bug or feature, or am I not understanding something about Moq?


Unit testing for signal processing?

I came across similar problems like this one: Test driven development for signal processing libraries


The fact behind the problem is that the output of signal processing is hard to define fully and qualitatively.


So the inject input > run program > verify output approach does not apply well to signal processing.


Does meeting the performance requirements in the specification mean that there are no bugs? Of course not. Then if it meets the requirements, why bother? Because bugs will bite; someday, they will.


In the end, the only feasible solution is to compare the output with a known-good equivalent, usually a Matlab version or some widely used library.


Matlab has a good collection of libraries, plus bounds checking and memory management, and it works in double precision. So comparing with Matlab code exposes pointer-out-of-bounds and stack-overflow bugs and evaluates whether integer precision is sufficient, but it does not answer questions like "what if the Matlab equivalent got it wrong, too?"


I can only tell myself: try to keep the Matlab equivalent simple, so that it is close to "so simple that there are obviously no bugs".


In my team, I have programmers at a variety of skill levels, and I need at least some kind of measure to control/enforce the quality of the code.


It's been more than two years since the last post. I wish there were some new developments in this area.


Please share with me, as practitioners, your ideas and opinions.
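For what it's worth, the "compare against a known-good equivalent" idea can itself be automated cheaply: keep the reference implementation naive and assert the optimized version matches it within a tolerance. A small self-contained sketch in Python, where a moving average stands in for the real DSP kernel:

```python
import math


def moving_average(xs, n):
    """Implementation under test: O(len(xs)) sliding-window average."""
    out = []
    acc = 0.0
    for i, x in enumerate(xs):
        acc += x
        if i >= n:
            acc -= xs[i - n]          # drop the sample leaving the window
        out.append(acc / min(i + 1, n))
    return out


def reference_moving_average(xs, n):
    """Reference version: slow, but 'so simple that there's obviously no bug'."""
    return [sum(xs[max(0, i - n + 1):i + 1]) / (i - max(0, i - n + 1) + 1)
            for i in range(len(xs))]


def assert_close(actual, expected, tol=1e-9):
    """Compare element-wise within a tolerance instead of exact equality."""
    assert len(actual) == len(expected)
    for a, e in zip(actual, expected):
        assert math.isclose(a, e, abs_tol=tol), (a, e)
```

The tolerance matters: floating-point accumulation means the fast and naive versions rarely agree bit-for-bit, so exact equality would make the comparison flaky.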


Robolectric unit test for fragment fails intermittently with ClassNotFoundException

I have a build on TeamCity that runs unit tests via Gradle. Intermittently, tests that involve fragments or activities fail with a ClassNotFoundException for classes like android.support.v4.app.FragmentTransitionCompat21$ViewRetriever or android.support.v4.app.ActivityCompat21$SharedElementCallback21. The tests fail when starting the fragment, and I've tried all of the methods to start a fragment from this question: How can I test fragments with Robolectric?


Here's an example of a test:



@Test
public void ContactSupportFragment_CallBtnClicked_CallWasMade() throws Exception
{
    ContactSupportFragment fragment = new ContactSupportFragment();
    startFragment(fragment);

    LinearLayout btnCall = (LinearLayout) fragment.getView().findViewById(R.id.contact_support_call_btn);
    btnCall.performClick();

    Mockito.verify(techSupportCall, Mockito.times(1)).call(Mockito.any(Context.class),
            Mockito.eq(Robolectric.application.getString(R.string.tech_support_phone_number)));
}


Here's an example of a stack trace:



java.lang.NoClassDefFoundError: android/support/v4/app/FragmentTransitionCompat21$ViewRetriever
at android.support.v4.app.FragmentManagerImpl.beginTransaction(FragmentManager.java:481)
at org.robolectric.util.FragmentTestUtil.startFragment(FragmentTestUtil.java:25)
at com.asurion.solutohome.callsupport.ContactSupportFragmentTest.ContactSupportFragment_CallBtnClicked_CallWasMade(ContactSupportFragmentTest.java:70)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.robolectric.RobolectricTestRunner$2.evaluate(RobolectricTestRunner.java:236)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.robolectric.RobolectricTestRunner$1.evaluate(RobolectricTestRunner.java:158)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:86)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:49)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:69)
at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:48)
at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
at org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:105)
at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at org.gradle.messaging.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:360)
at org.gradle.internal.concurrent.DefaultExecutorFactory$StoppableExecutorImpl$1.run(DefaultExecutorFactory.java:64)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassNotFoundException: android.support.v4.app.FragmentTransitionCompat21$ViewRetriever
at org.robolectric.bytecode.AsmInstrumentingClassLoader.loadClass(AsmInstrumentingClassLoader.java:88)
at android.support.v4.app.FragmentManagerImpl.$$robo$$FragmentManagerImpl_917e_beginTransaction(FragmentManager.java:481)
at android.support.v4.app.FragmentManagerImpl.beginTransaction(FragmentManager.java)
at org.robolectric.util.FragmentTestUtil.startFragment(FragmentTestUtil.java:25)
at com.asurion.solutohome.callsupport.ContactSupportFragmentTest.ContactSupportFragment_CallBtnClicked_CallWasMade(ContactSupportFragmentTest.java:70)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.robolectric.RobolectricTestRunner$2.evaluate(RobolectricTestRunner.java:236)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.robolectric.RobolectricTestRunner$1.evaluate(RobolectricTestRunner.java:158)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:86)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:49)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:69)
at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:48)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
at org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:105)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)


It hasn't happened locally yet, only on the TeamCity agent; however, I haven't found any difference between my machine and the agent (same SDK, same Gradle build, etc.). What can cause these exceptions?


How to use Gecko in Node.js

I am writing tests for some WebRTC code.

Unfortunately, WebRTC doesn't work in PhantomJS.


After some research I found SlimerJS and CasperJS.


Is there any working Node bridge for SlimerJS or CasperJS, similar to phantomjs-node, that allows evaluating client-side JavaScript on the Gecko engine?


Unit Test - New Operator - Redesign and Inject

The following class is very hard to unit test:



public class UserService
{
    public void Update(User user)
    {
        UserDAO userDAO = new UserDAO();
        userDAO.update(user);
    }
}


In chapter 7.6.2 of the book Practical Unit Testing with JUnit and Mockito, Tomek Kaczanowski suggests we should inject it as follows:



public class UserService
{
    private UserDAO _userDAO;

    public UserService(UserDAO userDAO)
    {
        _userDAO = userDAO;
    }

    public void Update(User user)
    {
        _userDAO.update(user);
    }
}


However, how can we use UserService without creating UserDao?



public class UserController
{
    public UserController()
    {
        // How do we initialize UserService without knowing about UserDAO here?
    }

    public ActionResult Update()
    {
        var user = new User();
        _userService.Update(user);
    }
}
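The usual answer is that UserController shouldn't create UserService either: construction is pushed up to a single composition root (often an IoC container), and each class only declares what it needs through its constructor. A self-contained Java sketch of the idea (type names simplified; String stands in for the real User type):

```java
import java.util.ArrayList;
import java.util.List;

// The service depends on an abstraction, not on a concrete DAO.
interface UserDao {
    void update(String user);
}

class UserService {
    private final UserDao userDao;

    UserService(UserDao userDao) { this.userDao = userDao; }

    void update(String user) { userDao.update(user); }
}

// The controller only knows about UserService, never about the DAO.
class UserController {
    private final UserService userService;

    UserController(UserService userService) { this.userService = userService; }

    void update(String user) { userService.update(user); }
}

// The one place in the application that wires the graph together.
class CompositionRoot {
    static UserController buildController(UserDao dao) {
        return new UserController(new UserService(dao));
    }
}
```

In a test, the DAO can then be a recording fake passed to the composition root, while in production the root passes the real UserDAO; neither the controller nor the service changes.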

Testing unit tests' helper methods

As I am writing tests, some of them have a lot of logic in them. Most of this logic could easily be unit tested, which would provide a higher level of trust in the tests.


I can see a way to do this, which would be to create a TestHelpers class, put it in /classes, and write tests for TestHelpers along with the regular tests.


I could not find any opinion on such a practice on the web, probably because the keywords for this problem are tricky ("tests for tests").


I am wondering whether this sounds like good practice, whether people have already done this, whether there is any advice on that, whether this points to bad design, or something of the sort.


I am running into this while doing characterization tests. I know there are some frameworks for it, but I am writing my own, because it's not that complicated and it gives me more clarity. Also, I can imagine that one can easily run into the same issue with unit tests.


To give an example, at some point I am testing a function that connects to Twitter's API and retrieves some data. In order to test that the data is correct, I need to check whether it's a JSON-encoded string, whether the structure matches Twitter's data structure, whether each value has the correct type, etc. The function that does all these checks on the retrieved data would typically be interesting to test on its own.


Any ideas or opinions on this practice?


PHPUnit test classes with camel case or underscore

When writing test cases in the xUnit style that PHPUnit follows, it seems that everyone uses the camel-case convention for method names:



public function testObjectHasFluentInterface()
{
    // ...
}


I have been naming my methods with a more "eloquent" PHPSpec style:



public function test_it_has_a_fluent_interface()
{
    // ...
}


Will this style create problems for me in the future? Personally, I find it vastly more readable and easier to come back to.


NUnit Alphabetical Order Assertion

I have a search box on a website that returns search results based on a keyword (I store these as a list in C#).


There are filter options which I need to test, one of which is product name A-Z.


When this is selected, the search results should be sorted accordingly.


Is there any way to assert with NUnit that this has been done against the list?
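For reference, NUnit has a built-in ordering constraint, so no manual sort-and-compare is needed. A sketch only (untested; `results` and `ProductName` are illustrative names standing in for your search-result list):

```csharp
// Project out the field the filter is supposed to sort by,
// then assert the sequence is already in ascending order.
var names = results.Select(r => r.ProductName).ToList();
Assert.That(names, Is.Ordered);

// For a case-insensitive A-Z check, Is.Ordered can be combined with
// .Using(StringComparer.OrdinalIgnoreCase).
```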


Unit testing a controller which has a $state.go call

How can I write unit tests for a function that calls $state.go() and is expected to redirect to that particular state?



$scope.inviteMembers = (id) => {
  $state.go('invite', {deptId: id});
};
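A common pattern is to inject $state into the test, spy on its go method, and assert the call; this avoids the transition actually running. A Jasmine-style sketch (untested; the module and controller names are assumptions, not from the question):

```javascript
describe('inviteMembers', function () {
  var $scope, $state;

  beforeEach(module('myApp')); // hypothetical module name

  beforeEach(inject(function ($rootScope, $controller, _$state_) {
    $scope = $rootScope.$new();
    $state = _$state_;
    spyOn($state, 'go'); // stub out the transition itself
    $controller('MyCtrl', { $scope: $scope, $state: $state });
  }));

  it('redirects to the invite state with the department id', function () {
    $scope.inviteMembers(42);
    expect($state.go).toHaveBeenCalledWith('invite', { deptId: 42 });
  });
});
```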

Tuesday, December 30, 2014

How to write a failing unit test for controller actions with the Authorize attribute

My current structure of my application is -



public class MyController : Controller
{
    private readonly MyRepository _repository;

    [Authorize]
    public ActionResult Index()
    {
        var items = _repository.GetAllItems();
        if (items.Count() == 0)
            return View("EmptyItems");
        else
        {
            return View("List", items);
        }
    }
}

public class Repository : IRepository
{
    public IEnumerable<TodoListModel> GetAllTodoItems()
    {
        var userid = _securityService.GetUser();
        var list = _dbcontext.TotalItems.Where(e => e.UserId == userid);

        return list;
    }
}


Below is the unit test method I have written; it always succeeds. Could someone please advise how I would write a unit test that checks whether the user is authenticated, given my code above? I am new to MVC, so any detailed explanation would be much appreciated.



[TestMethod]
public void IndexAction_Should_Return_View_For_AllItems()
{
    //Arrange
    var controller = new MyController();

    //Act
    var result = controller.Index();

    //Assert
    Assert.IsNotNull(result as ViewResult);
}
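For context, [Authorize] is enforced by the MVC pipeline, not by the action method itself, so calling Index() directly in a unit test always succeeds regardless of authentication; that is why the test above cannot fail. What a unit test can verify is that the attribute is present on the action. A reflection-based sketch (untested):

```csharp
[TestMethod]
public void Index_Should_Require_Authorization()
{
    // Find the action method and check it is decorated with [Authorize].
    var method = typeof(MyController).GetMethod("Index");

    var attributes = method.GetCustomAttributes(typeof(AuthorizeAttribute), true);

    Assert.IsTrue(attributes.Length > 0, "Index must be protected by [Authorize]");
}
```

Whether an unauthenticated request is actually redirected is then an integration-level concern, exercised through the full pipeline rather than a direct method call.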

Run main method on server for testing

I am using the Eclipse IDE with an Apache Tomcat server. I have written a class that fetches data from a database. I want to unit test this class by running its main method. As the function relies on the server's connection pool, I have to run it in my server environment.


In Eclipse, when I run this class using (Run As --> Run on Server), it runs as an HTML file, giving a 404 error. Clearly, it's not running the main method. So is there a way I can unit test such a function with ease, without going through some HTML interface?


Should I keep using mocks and stubs in domain testing?

I am creating an n-tier application following DDD. I have test projects for each individual layer. Right now I am using FakeItEasy to create mocks and stubs to run domain tests, because I still haven't implemented my data access layer.


My question is: should I keep using mocks and stubs to test the domain layer even after implementing the data access layer, so the test data does not depend on the DAL? Or should I use actual data retrieved through the DAL to run domain tests?


Thanks!


How would I configure Effort Testing Tool to mock Entity Framework's DbContext without the actual SQL Server database up and running?

Our team's application development involves using Effort Testing Tool to mock our Entity Framework's DbContext. However, it seems that Effort needs to see the actual SQL Server database that the application uses in order to mock the DbContext, which seems to go against proper unit testing principles.


The reason being that, in order to unit test our application code by mocking anything related to database connectivity (for example, Entity Framework's DbContext), we should never need a database to be up and running.


How would I configure Effort Testing Tool to mock Entity Framework's DbContext without the actual SQL Server database up and running?


Should JavaScript Event Handlers Be Unit Tested

There are a lot of questions here about unit testing event handlers in other languages, but I haven't been able to find a good answer when it comes to JavaScript. Specifically, I'm talking about a case like:



// inside a "view" class definition:
handleCheckboxClick: function() {
this.relevantData.toggleSomeValue();
return false;
}

// later on:
$('#someCheckbox').on('click', view.handleCheckboxClick);


Clearly there is logic in the event handler (this.relevantData.toggleSomeValue()), but at the same time no other method will ever call this handler, so it's not like unit-testing it will catch some future refactoring-related bug. And in any given JavaScript codebase there are A LOT of these handlers, so it's a non-trivial amount of work to test them.


Plus, in many JS shops (certainly in ours) there are also feature-level tests being done with Selenium that would typically catch obvious UI issues (such as when an event handler breaks).


So, on the one hand I get that "unit testing of logic == good", and I don't want to shoot myself in the foot by not testing certain code if I will pay for it later. On the other hand, this particular subset of code seems to have a high cost and low value when it comes to unit testing. Thus, my question for the SO community is: should JS developers unit test their event handling functions?


Mocking Application Variants in Unit Test for Custom Gradle Plugin That Depends On Android Plugin

I am developing a Gradle plugin. My plugin depends on the Android Gradle plugin, and it adds an action to some Android Gradle tasks, plus some tasks of its own, based on the application variant.



public class MyPlugin implements Plugin<Project> {
@Override
void apply(Project project) {
project.android.applicationVariants.all {
project.task(type: SendApkTask, dependsOn: it.assemble, "send${it.name.capitalize()}Apk")
}
}
}


I want to create a unit test for MyPlugin#apply(Project).



public class MyPluginTest {

Project project

@Before
public void setup() {
project = ProjectBuilder.builder().build()
}

@Test
public void testMyPluginAddsTasks() {
project.apply plugin: 'com.android.application'
project.apply plugin: MyPlugin

assertNotNull(project.tasks['sendReleaseApk'])
}
}


I cannot get this assertion to pass. After debugging, I found that project.android.applicationVariants is empty. How do I mock the Android Gradle plugin effectively for my test?


How to unit test a controller that in turn calls a repository with two parameters

At present, I am writing unit tests for my controller. Below is the structure of my code in the project.


MyController Class



public class MyController : Controller
{
private readonly MyRepository _myRepository;

public MyController()
: this(new MyRepository())
{

}

[HttpGet]
public ActionResult Index()
{
var items = _myRepository.GetAllItems();
if (items.Count() == 0)
return View("EmptyItems");
else
{
return View("List", items);
}
}
}


MyRepository Class



public class MyRepository : IDisposable, IMyRepository
{
private readonly MyDbContext _dbcontext;
private readonly ISecurityService _securityService;


public MyRepository() : this(new MyDbContext(), new SecurityService())
{

}
public MyRepository(MyDbContext context, ISecurityService securityService)
{
_dbcontext = context;
_securityService = securityService;
}
public IEnumerable<MyModel> GetAllItems()
{
var userid = _securityService.GetUser();
var todoList = _dbcontext.MyList.Where(e => e.UserId == userid);

return todoList;
}
//Other Methods etc...
......
}


SecurityService class



public class SecurityService : ISecurityService
{
public int GetUser()
{
return (int)Membership.GetUser().ProviderUserKey;
}
}


Here, all methods inside my repository depend on the GetUser method; hence I have initialized the security service inside the constructor. The repository class is initialized from the controller constructor.


My issue is that I can't unit test the Index action without initializing the dbcontext and the security service. Could someone please advise whether I am doing the right thing, or what changes are required in the structure of my code so that I can unit test my application? I am new to MVC, so any suggestions would be much appreciated.


Can I have more than one expectation inside an it block?

Just wondering because this behaviour is a little weird:



it 'should call console.log on mouse enter', ->
expect(console.log.calls.count()).toEqual 1


The above passes with no problems. The below passes with no problems either:



it 'should call console.log on mouse enter with the correct parameters', ->
expect(console.log).toHaveBeenCalledWith 'mouse has entered'


But this:



it 'should call console.log on mouse enter with the correct parameters', ->
expect(console.log.calls.count()).toEqual 1
expect(console.log).toHaveBeenCalledWith 'mouse has entered'


Doesn't fail, but gives me this strange error:



Opera 26.0.1656 (Linux) [object Object] [object Object] [object Object] [object Object] [object Object] [object Object] [object Object] [object Object] [object Object] encountered a declaration exception FAILED
TypeError: Cannot read property 'expect' of null


How can I prevent this error from occurring? Or is it normal? Something to do with CoffeeScript perhaps? CoffeeScript automatically returns the result of the last expression...


Here's what my rendered code looks like: (those returns are looking a little dodgy)


Update



describe('supermanDirective', function() {
var run;
run = function() {
module('App');
return injectDirective('<div enter></div>');
};
describe('something', function() {
beforeEach(function() {
return run();
});
return it('should be defined', function() {
return expect(element).toBeDefined();
});
});
describe('something', function() {
beforeEach(function() {
spyOn(console, 'log');
element.trigger('mouseenter');
return run();
});
it('should call console.log on mouse enter', function() {
return expect(console.log.calls.count()).toEqual(1);
});
return it('should call it with correct parameters', function() {
return expect(console.log).toHaveBeenCalledWith('mouse has entered');
});
});
return describe('something', function() {
beforeEach(function() {
spyOn(console, 'log');
return run();
});
return it('should call console.log on mouse enter', function() {
return expect(console.log.calls.count()).toEqual(0);
});
});

How can I test async functions using Express, Mongoose and nodeunit?

How can I use node-mocks-http for testing async code? For example, I have this in my Express router, reachable through GET /category/list:



var getData = function (req, res) {
Category.find({}, function (err, docs) {
if (!err) {
res.json(200, { categories: docs });
} else {
res.json(500, { message: err });
}
});
};


and in the test



var request = httpMocks.createRequest({
method: 'GET',
url: '/category/list',
body: {}
});
var response = httpMocks.createResponse();
getData(request, response);
console.log(response._getData());
test.done();


but the response does not contain the JSON (the response comes back a few seconds later). How can I test this? Any help is much appreciated.
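Language aside, the failure mode here is that the assertion runs before the asynchronous callback fires, so the test has to wait for the response before asserting (or use the framework's done-callback support). The pattern can be sketched with hypothetical names in Python, using an Event to wait for a callback-style API:

```python
import threading

# Hypothetical stand-in for the Express handler: it invokes the response
# callback asynchronously, the way res.json fires only after the database
# query in Category.find completes.
def get_data(send_json):
    threading.Timer(0.01, lambda: send_json(200, {"categories": ["a", "b"]})).start()

done = threading.Event()
result = {}

def send_json(status, body):
    result["status"], result["body"] = status, body
    done.set()  # signal the waiting test that the "response" has arrived

get_data(send_json)
# The fix: block until the callback has fired instead of asserting immediately.
assert done.wait(timeout=2), "callback never fired"
assert result["status"] == 200
```

The same shape applies in nodeunit: move the assertions and test.done() into the response callback rather than calling them right after getData returns.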


How can I run a unit test project and return the results to an external application?

Currently, I am attempting to create a simple console app that does the following:



  1. Compile my project

  2. On Success, Compile/Run the Unit Tests

  3. On Successful tests, continue with the remainder of the program


I'm stuck on a couple of things:



  • How do I verify that the compilation was successful?

  • How do I execute the unit tests?

  • How do I verify that the unit tests passed?


Note: I'm not 100% sure that a custom console app is the right thing to do here, so I'm open to using existing tools/apps as long as they are very lightweight, cheap/free, etc.


How to skip first N tests in PHPUnit?

The scenario: run a huge batch of tests with PHPUnit, and some test (say 537 of 1544) fails after many minutes. The change is small and unlikely to affect the previous tests, so I'd like to be able to skip the first 536 tests, doing something like this to "pick up where I left off":



phpunit --skip=536


Of course I will run all tests eventually, but right now, I don't want to have to wait many minutes to get back to the broken test(s). I know I can run a single suite but that is tedious/unhelpful if several dozen suites remain to be tested.


Is there a way? Or something even close?
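As far as I know, PHPUnit's --filter and --group select tests by name rather than position, so there is no direct --skip flag. The underlying idea, flatten the suite and drop the first N cases, is simple; here it is sketched with Python's unittest as a stand-in (MathTests is a made-up placeholder):

```python
import unittest

class MathTests(unittest.TestCase):
    def test_a(self): self.assertEqual(1 + 1, 2)
    def test_b(self): self.assertEqual(2 * 2, 4)
    def test_c(self): self.assertEqual(3 - 1, 2)

def skip_first(suite, n):
    """Flatten a (possibly nested) suite and drop the first n test cases."""
    flat = []
    def collect(s):
        for item in s:
            if isinstance(item, unittest.TestSuite):
                collect(item)
            else:
                flat.append(item)
    collect(suite)
    return unittest.TestSuite(flat[n:])

full = unittest.defaultTestLoader.loadTestsFromTestCase(MathTests)
trimmed = skip_first(full, 2)
result = unittest.TextTestRunner(verbosity=0).run(trimmed)
# Only test_c runs; result.testsRun is 1.
```

Positional skipping is fragile, though: test order can change between runs, which is why most runners prefer name-based filtering.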


Run "traditional" unit test in Nightwatch

I have a couple of utility functions, like validateEmail(email) that I would like to test directly, that is, without going through the UI. Is that possible through Nightwatch? I have all my UI testing in Nightwatch, and would like to stick with a single toolset for all my testing. And yes, I "get it" that most testing can (and perhaps should) be at the "public" level. But for completeness I also like to directly test some internal utility functions with every conceivable input. Thanks!


Mockito object methods of another class

Hi, I'd like to know how to emulate the method Validator.validateConnection(). The problem is that validateConnection does not exist in Class_Implementation, and I don't want to create that method in Class_Implementation. validateConnection connects to the database to check whether the connection is alive. When the Mockito test runs, I get a java.lang.NullPointerException caused by a NamingException ("need to specify class name in environment").


The real problem is that when the test calls the line Boolean resp = mockImpl.checkConnection();, inside checkConnection() the call to Validator.validateConnection() tries to connect to the database. I just want to emulate this line and return true or false, but validateConnection() is a static method of the Validator class.


If you need more information to fix this, please let me know.



public class Class_Implementation {

public boolean checkConnection(){

boolean isConnectionalive = false;

Validator.validateConnection();

// another things for do

return false;

}

}

public class Validator {

public static Boolean validateConnection(){


Connection conn = new Connection();

Boolean connectionAlive = false ;
connectionAlive = conn.isConnectionAlive();

if (connectionAlive){

return true;
}else{

return false;

}
}

}


public class Connection {



public boolean isConnectionAlive(){

// Code for connecting to the DB goes here; a value must be returned
return false;
}

}

// class for do the test
@RunWith(PowerMockRunner.class)
@PrepareForTest({Class_Implementation.class,Validator.class})
public class TestConnection {

@Test
public void validate_Connection() throws Exception{



Class_Implementation mockImpl = PowerMock.createPartialMock(Class_Implementation.class);

PowerMock.mockStatic(Validator.class);


PowerMockito.when(mockImpl, Validator.validateConnection() ).thenReturn(true);

PowerMock.replayAll(mockImpl);


Boolean resp = mockImpl.checkConnection();

PowerMock.verifyAll();

Validate.notNull(resp);


}


}
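For comparison, the same shape of problem, a method calling a static validator that you want to stub out, is what unittest.mock.patch handles in Python. This sketch (hypothetical stand-ins for the Java classes above) mirrors what PowerMock.mockStatic plus a stubbed return aims to do:

```python
from unittest import mock

# Hypothetical stand-in for Validator.validateConnection: would hit the DB.
def validate_connection():
    raise RuntimeError("would open a real database connection")

# Hypothetical stand-in for Class_Implementation.checkConnection.
def check_connection():
    validate_connection()  # the static dependency we want to stub out
    return True

# Replace the module-level function for the duration of the test,
# roughly what mockStatic + when(...).thenReturn(true) achieves in Java.
with mock.patch(f"{__name__}.validate_connection", return_value=True) as fake:
    resp = check_connection()
    fake.assert_called_once()

assert resp is True
```

The design point is the same in both languages: the test replaces the dependency where it is looked up, so checkConnection never touches the real connection code.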


How to fake C++ classes containing non-virtual functions?

I'm trying to bring some C++ legacy code under test. In particular, I have a class hierarchy, say, A < B < C (i.e., A is the superclass of B, and B is the superclass of C), and there is a global reference to an object of type C which is used from all over the system's code (singleton pattern). The goal is to replace that C object with some fake object (in fact, C is used to access a database).


My first attempt was to introduce interfaces IA, IB, and IC (which contain pure virtual versions of the functions of the corresponding class), let each class implement its interface, and change the type of the global C reference to IC. In the setup function of my tests, I would then replace the globally referenced C object with my own implementation of IC, making the whole system use my fake implementation.


However, classes A, B, and C each contain quite a few non-virtual functions. Now, if I made the classes inherit from my interfaces, I would change the semantics of these functions from non-virtual to virtual (Feathers discusses this problem in "Working Effectively with Legacy Code", p. 367). In other words: I have to check each and every call to my global object, and I have to make sure that after my changes the same functions are still called. This sounds like a LOT of ERROR PRONE work to me.


I also thought about making the non-virtual functions "final", i.e., tell the compiler that the functions of A, B and C must not be hidden in subclasses (which would make the compiler tell me all potentially dangerous functions of B and C - if a function is not hidden in a base class, the above effect can not happen at all), but that doesn't seem to be supported by C++ (we are not yet using C++11, but even its final keyword only seems to be applicable to virtual functions).


To make the situation even more difficult, classes A, B, and C also contain public attributes, virtual functions, and also some template functions.


So my question is: How to cope with the situation I described above? Are there any C++ capabilities that I have missed, and which could help in my scenario? Any design patterns? Or even any refactoring tools? My main requirement is that the changes must be as safe as possible, since the classes I'd like to fake are rather crucial to the system... I would also be happy with an "ugly" solution which would allow me to put tests into place (and which could be refactored later if the system is appropriately covered with tests).


Mockito error when mocking Request Dispatcher

I am using getRequestDispatcher to forward attributes from a servlet to a JSP page, like so (the following is taken from the servlet):



request.setAttribute("AttributeValue",message);
request.getRequestDispatcher("DestinationPage.jsp").forward(request, response);


I am now using Mockito to mock the HttpSession as well as the HttpRequest, HttpResponse and the Dispatcher, as follows:



@Test
public void freeRiskAndAmountValidation() throws ServletException,IOException
{
//given
Mockito.doReturn("testAttr").when(session).getAttribute("A");
Mockito.when(request.getRequestDispatcher("DestinationPage.jsp")).thenReturn(dispatcher);
//when
bets.doGet(request,response);
//then
Mockito.verify(dispatcher).forward(request,response);
}


However, my test throws a NullPointerException (shown in the console) AND Mockito gives me the following error:



Wanted, but not invoked.
dispatcher.forward(request,response);
Actually, there were zero interactions with this mock.


Could anyone tell me what is causing this error, and whether there is a better way of mocking the dispatcher?


How to write the description of specs2 test correctly?

I use specs2 to write Scala tests, and have some questions about how to write the descriptions.


Suppose I have this test scenario:



Given: a button
When: I click on it
Then: it will show a dialog with "Hello world" text


I have several ways to write the test in specs2,


1.



"a dialog with 'Hello world' text" should {
"be shown if I click on the given button" in {
// click on the button and checking dialog
}
}


2.



"When I click on the given button, it" should {
"show a dialog with 'Hello world' text" in {
// click on the button and checking dialog
}
}


3.



"When I click on the given button, a dialog with 'Hello world' text" should {
"be shown" in {
// click on the button and checking dialog
}
}


I'm not sure which one is best, or whether there is a better way.


Unit Test project is insisting that it requires a reference to EntityFramework

I have an issue that just came up: a unit test project insisting that it requires a reference to EntityFramework, and I am convinced it doesn't need it. Other projects reference the same project/extension method that the unit test project is testing, and use the extension method just fine without a reference to EntityFramework.


I have found that if I simply invoke the extension method as a static method in the unit test project, then the project compiles just fine; I'm completely baffled. I did not see anything informative in the build output.


This does not compile:



[TestMethod]
public void BuildsEmptyRadioButtonList()
{
var htmlHelper = Creator.GetHelper();

var radioButtonList = htmlHelper.RadioButtonList("RadioGaga", new SelectListItem[0]);

var expected = MvcHtmlString.Create(@"...");
Assert.AreEqual(expected.ToHtmlString(), radioButtonList.ToHtmlString());
}


Build output:



1>------ Build started: Project: HA.Shared.Utilities.Mvc.Tests, Configuration: Debug Any CPU ------
1>C:\hatfs\Web2014\4-Test\Source\HA.Shared.Utilities.Mvc.Tests\HtmlHelperRadioExtensionsTests.cs(25,17,25,20): error CS0012: The type 'System.Data.Entity.IDbSet`1<T0>' is defined in an assembly that is not referenced. You must add a reference to assembly 'EntityFramework, Version=6.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.
========== Build: 0 succeeded, 1 failed, 8 up-to-date, 0 skipped ==========


The error is pointing to the “var” in the line that starts with “var radioButtonList”, I tried changing the “var” to “IHtmlString” with no change.


This does compile:



[TestMethod]
public void BuildsEmptyRadioButtonList()
{
var htmlHelper = Creator.GetHelper();

var radioButtonList = HtmlHelperRadioExtensions.RadioButtonList(htmlHelper, "RadioGaga", new SelectListItem[0]);

var expected = MvcHtmlString.Create(@"...");
Assert.AreEqual(expected.ToHtmlString(), radioButtonList.ToHtmlString());
}


Build output:



1>------ Build started: Project: HA.Shared.Utilities.Mvc.Tests, Configuration: Debug Any CPU ------
1> HA.Shared.Utilities.Mvc.Tests -> C:\hatfs\Web2014\4-Test\Source\HA.Shared.Utilities.Mvc.Tests\bin\Debug\HA.Shared.Utilities.Mvc.Tests.dll
========== Build: 1 succeeded, 0 failed, 8 up-to-date, 0 skipped ==========


The signature of the RadioButtonList method is: public static MvcHtmlString RadioButtonList( this HtmlHelper htmlHelper, string name, IEnumerable<SelectListItem> listItems, object radioButtonHtmlAttributes = null, object labelHtmlAttributes = null, bool vertical = false)


Steps to install Microsoft SQL Server Database on a Visual Studio Online account, and use it in our automated build and automated unit tests

One of our customers uses Visual Studio Online ( http://www.visualstudio.com/en-us/products/what-is-visual-studio-online-vs.aspx ) which is based on capabilities of Team Foundation Server (TFS)


We were researching how to do automated Builds and automated Unit Tests using the Visual Studio Online account.


Using Visual Studio 2012 IDE, I was able to setup a build with the Visual Studio Online account. However, my Unit Tests need a Microsoft SQL Server Database to run properly. On my development computer, I have Microsoft SQL Server Express installed.


What are the steps to install a Microsoft SQL Server database on a Visual Studio Online account, and use it in our automated build and automated unit tests that run within our Visual Studio Online account?


Why is the line throwing an exception not covered

I have a piece of code that is shown as not covered, even though I have a specific test to trigger it. Here is the code:



private function processConfiguration($baseFileName, ConfigurationInterface $definition)
{
$fileName = $this->createEnvironmentSpecificFileName($baseFileName);

$processor = new Processor();
$configuration = $this->loader->load($fileName);

if (is_null($configuration) === true) {
throw new \UnexpectedValueException('The configuration file can not be empty.');
}

return $processor->processConfiguration($definition, $configuration);
}


And the unit test that is made to specifically make this function throw an exception:



public function testGetQueueSettingsWithEmptyFile()
{
$this->setExpectedException('UnexpectedValueException');
$loader = $this->mockLoaderWith([], null, 0);
(new Reader($loader))->getQueueSettings();
}


However, even though this unit test passes, when I get the coverage report using Codeception, which probably gets the report from PHPUnit, this line is red:


throw new \UnexpectedValueException('The configuration file can not be empty.');


Why?


Modern UnitTest++ replacement

I'm methodically upgrading my source code to get with the C++11 times, and one of the pieces that a lot of my code interacts with is UnitTest++.


I dedicate the latter half of every one of my implementation cpp files to unit tests, so they include many



TEST(testname) {
// test code
}


declarations.


Now, UnitTest++ is about 8 years old and it still compiles great, so I have no urgent need to replace it. However I have found that it is probably no longer being maintained (though its existing features certainly seem solid enough, this is a bad sign) as the website and sourceforge are down.


So even though my code works fine now, it may benefit me now to switch to a better system earlier rather than later, because it will reduce translation burden in the future.


I looked around a bit and there seem to be a few options available to me. Particularly interesting is libunittest and others like CATCH which is header-only.


My question is for folks who have maybe had experience with UnitTest++ in the past and other unit testing systems, what has worked well for you and if you have any recommendations. I am looking for something that is extremely portable and which has zero external dependencies beyond a C++98/03 or C++11 compiler (gcc, clang, msvc) and the standard libraries, and where being header-only is a plus but not necessary.


So I guess my preferences do tend to narrow down the options quite a bit. Even with UnitTest++ I enjoy its portability and self-containedness, but I have had to write a good ~100 or so lines worth of code to extend it to be flexible for me in two ways:



  • allow me to specify specific tests to run (whether it's choosing the tests by name, or by source file in which they're implemented, or the test suite name)

  • customize reporting behavior for tests such as timing, etc


Monday, December 29, 2014

Should I unit test private/protected methods?

This is actually language-agnostic, but I'll give you context in Python.


I have this parent class



class Mamal(object):
def __init__(self):
""" do some work """

def eat(self, food):
"""Eat the food"""
way_to_eat = self._eating_method()
self._consume(food)

def _eating_method(self):
"""Template method"""

def _consume(self, food):
"""Template method"""


Here eat is the only public method; _consume and _eating_method are protected methods which will be implemented by child classes.


What will you test when you have written only the Mamal class?


Obviously all 4 methods.


Now let's introduce a child:



class Whale(Mamal):
def _eating_method(self):
"""Template method"""

def _consume(self, food):
"""Template method"""


Look at this class: it has only the 2 protected methods.


Should I test all 4 methods of Whale (including the 2 inherited ones), or just test the changes introduced (only the 2 overridden methods)?


What is the ideal case?
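One way to frame the trade-off: exercise the parent's public eat() once through a minimal stub child, then test only what each real child overrides. A sketch of that idea (class name spelled Mamal as in the question; the stub is hypothetical):

```python
class Mamal(object):
    """Parent as in the question: eat() is the public template method."""
    def eat(self, food):
        self._eating_method()
        self._consume(food)
    def _eating_method(self):
        raise NotImplementedError
    def _consume(self, food):
        raise NotImplementedError

class StubMamal(Mamal):
    """Minimal child used only to test the parent's orchestration."""
    def __init__(self):
        self.calls = []
    def _eating_method(self):
        self.calls.append("eating_method")
    def _consume(self, food):
        self.calls.append(("consume", food))

# Test the parent's template method once, via the stub:
m = StubMamal()
m.eat("krill")
assert m.calls == ["eating_method", ("consume", "krill")]
```

A real child like Whale then only needs tests for its own _eating_method/_consume implementations; re-running the eat() test per subclass adds little unless the subclass overrides eat itself.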


Unit testing a REST service using JUnit and Mockito

I am new to unit testing. I am trying to test a controller which gets data from a database. My issue is that when I perform a GET request to the URI, I am not able to get the data from the database. If I run the application itself, I can fetch the data, but I cannot do it from the test code.


here is my test code:



public class RestTest {

private MockMvc mockMvc;

@Autowired
DataSource dataSource;

@Autowired
private Example exampledao;
@Autowired
private WebApplicationContext webApplicationContext;
@Before
public void setUp() {

Mockito.reset(exampledao);

mockMvc = MockMvcBuilders.webAppContextSetup(webApplicationContext).build();
}
@Test
public void findAllObjects() throws Exception

{
Components first = new TodoBuilder()
.cname("asdfasf")
.mdesc("asdcb")
.cdesc("asdfa")
.ccode("asdf")
.unitrateusd(24)
.build();

when(exampledao.list()).thenReturn(Arrays.asList(first));
mockMvc.perform(get("/getdata"))
.andExpect(status().isOk())


.andExpect(content().contentType(TestUtil.APPLICATION_JSON))
.andExpect(jsonPath("$", hasSize(0)))
//.andExpect(jsonPath("$", hasSize(0)))
.andExpect(jsonPath("$[0].cname", is("asdfasf")))
.andExpect(jsonPath("$[0].mdesc", is("asdcb")))
.andExpect(jsonPath("$[0].cdesc", is("asdfa")))
.andExpect(jsonPath("$[0].ccode", is("asdf")))
.andExpect(jsonPath("$[0].unitrateusd", is(24)));

verify(exampledao, times(1)).list();

}


here is my actual controller which i need to check:



@RequestMapping(value="/getdata" , method=RequestMethod.GET)
public List<Components> listContact(ModelAndView model) throws IOException{
System.out.println("hii.. i made atest call");
List<Components> listContact;
listContact= exampeldao.list();


System.out.println(" but i dnt have any data.....");

return listContact;

}


I can make a call from the test to "/getdata", but listContact = exampeldao.list() is not executing; list() is defined in another class. The next print statement does execute, though.


When I run the test class, I also get a SecurityException: class "org.hamcrest.Matchers" signature information does not match the signature information of another class in the same package.


Can anyone let me know where I am going wrong?


Can't check out solution file after installing Jasmine JS on a Windows machine

When I use Jasmine JS for unit testing with Visual Studio, I can't check out my solution file after installing Jasmine JS. I am using Visual Studio 2013 and Windows 8.1. Can someone give me a reason and a solution for this?


java.lang.AssertionError: expected

My TestNG test implementation throws an error even though the expected value appears to match the actual value.


Here is the TestNG code:



@Test(dataProvider = "valid")
public void setUserValidTest(int userId, String firstName, String lastName){
User newUser = new User();
newUser.setLastName(lastName);
newUser.setUserId(userId);
newUser.setFirstName(firstName);
userDAO.setUser(newUser);
Assert.assertEquals(userDAO.getUser().get(0), newUser);
}


The error is:



java.lang.AssertionError: expected [UserId=10, FirstName=Sam, LastName=Baxt] but found [UserId=10, FirstName=Sam, LastName=Baxt]


What have I done wrong here?
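One likely cause (an assumption, since the User class isn't shown) is that User doesn't override equals(), so assertEquals falls back to reference identity even though toString() prints identical fields. The pitfall can be sketched in Python, where equality likewise relies on the objects defining __eq__:

```python
class User:
    def __init__(self, user_id, first, last):
        self.user_id, self.first, self.last = user_id, first, last
    def __repr__(self):
        return f"[UserId={self.user_id}, FirstName={self.first}, LastName={self.last}]"

a = User(10, "Sam", "Baxt")
b = User(10, "Sam", "Baxt")
print(repr(a), repr(b))  # identical printed forms
print(a == b)            # prints False: default equality is identity

# Defining value equality (in Java: override equals() and hashCode()) fixes it:
User.__eq__ = lambda s, o: (s.user_id, s.first, s.last) == (o.user_id, o.first, o.last)
print(a == b)            # prints True
```

So if the DAO rebuilds the User from the database rather than returning the same instance, the assertion compares two distinct objects that merely print alike.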


Test AngularJs view where template is read from a file

I want to test an AngularJs view with an external template. Almost all the examples I have found demonstrate code like this:



var element = angular.element('<div>something goes here... </div>');
element = $compile(element)(scope)


But my HTML is a little more complicated than that so I want the test to read it from a file. Where can I find an example of that?


Security-scoped NSURL bookmarks on Xcode Server

In the unit tests for my app, I create an app-scoped NSURL bookmark, and then a document-scoped one from it, because the part of my app I'm testing expects that. These tests have always worked correctly on my machine, but are now failing when run on an Xcode Server bot. I don't codesign the unit test bundle, but tried doing so as a troubleshooting step, which made no difference.



NSError *error = nil;
NSURL *originalURL = [NSURL fileURLWithPath:@"gitrepo/path/to/a/file.txt"];
NSData *appScopedBookmark = [originalURL bookmarkDataWithOptions:NSURLBookmarkCreationWithSecurityScope
includingResourceValuesForKeys:nil
relativeToURL:nil
error:&error];

NSError *docScopedError = nil;
BOOL isStale = NO;
NSURL *url = [NSURL URLByResolvingBookmarkData:appScopedBookmark
options:NSURLBookmarkResolutionWithSecurityScope
relativeToURL:nil
bookmarkDataIsStale:&isStale
error:&docScopedError];

XCTAssertNil(error, @"Error while resolving app-scoped bookmark");

[url startAccessingSecurityScopedResource];

NSURL *relativeToURL = ...
NSData *bookmark = [url bookmarkDataWithOptions:NSURLBookmarkCreationWithSecurityScope
includingResourceValuesForKeys:nil
relativeToURL:relativeToURL
error:&docScopedError];

[url stopAccessingSecurityScopedResource];

XCTAssertNil(docScopedError, @"Error while creating document-scoped bookmark from URL:\n%@\nrelative to: %@",
url, relativeToURL);


The final assertion fails, and the message that gets logged verifies that both URLs are non-nil; I was also able to verify that both files exist. They are both contained within the Git checkout directory, which the account has full access to. The relativeToURL points to a file created earlier in the test.


The NSError produced has the following info:



"Error Domain=NSCocoaErrorDomain Code=256 "The file couldn’t be opened." (Item URL disallowed by security policy) UserInfo=0x10691c6d0 {NSDebugDescription=Item URL disallowed by security policy}"



What security policy could it be referring to, and how would I update it?


Should I have failing tests?

Please note: I'm not asking for your opinion. I'm asking about conventions.


I was just wondering whether I should have both passing and failing tests with appropriate method names, such as Should_Fail_When_UsageQuantityIsNegative(), Should_Fail_When_UsageQuantityMoreThan50(), and Should_Pass_When_UsageQuantityIs50().


Or instead, should I code them all to pass and keep every test passing?


Thanks! Cheers!


Programmatically dismissing simulator

I know that Xcode requires the simulator to launch while running a unit test. During my CI builds, I run unit tests. Sometimes the simulator hangs up the tests, and the tests complete once I dismiss the simulator.


Is there a way to: 1. detect if we are in "test mode" in the app delegate, and 2. programmatically dismiss the simulator, either as soon as it appears or after x amount of time?


Maybe I should write a script using AppleScript that I can include in my project and call upon it if the build has been running for more than x amount of time?


How to set up unit test boilerplate with grunt-init-gruntfile automatically?

I've been working with the Grunt CLI's scaffolding (grunt-init-gruntfile) for a quick start on learning Grunt/Node. My expectation (based on grunt-init jquery) was that I would run the following commands, filling out the fields as I went along:



grunt-init gruntfile
npm init
npm install
grunt


and the result would be an empty project with unit tests, etc. running against the empty JS file (pkgname.js) as set in npm init. What basic steps am I missing to get from those commands to having the unit test boilerplate ready to go?


What I have now is "Error: no test specified" after running grunt, and the remaining tasks seem to produce empty files in my dist folder (except for the label). I notice it seems to assume the creation of ./lib/[package.name].js, but what I'm not sure about is the quick start for getting the qunit tests running via the Gruntfile. Is there a manual element here to creating the boilerplate for the test page and JS files, etc.? My goal is learning how to auto-create as much as possible using the grunt-cli.


I'm fairly new to node and I think there are some fundamentals I'm not grasping yet, I'm happy to elaborate if this question is too broad, poorly formed, etc.


Getting Error in Asmx Web Service Unit Testing

I am getting the error message below while running a unit test for an ASMX web service:


Failed MyFunction The ASP.NET Web application at 'D:\MyProjectFolder' is already configured for testing by another test run. Only one test run at a time can run tests in ASP.NET. If there are no other test runs using this Web application, ensure that the Web.config file does not contain an httpModule named HostAdapter.


I checked web.config; the line below is already present:



<httpModules>
    <add name="HostAdapter" type="Microsoft.VisualStudio.TestTools.HostAdapter.Web.HttpModule, Microsoft.VisualStudio.QualityTools.HostAdapters.ASPNETAdapter, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
</httpModules>

Generate python unit test document

My bosses want a list of all the unit tests with their descriptions. Since this will change frequently I'd like to find a way to generate it instead of trying to manually keep it up to date. I am using python for this project. Is there some way to make doxygen or some other tool do this?
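One lightweight alternative to doxygen, sketched below under the assumption that each test carries a docstring: let unittest itself enumerate the tests and print each test id together with its shortDescription() (the first line of the docstring). The BillingTests class and its method names here are invented purely for illustration; the real report would iterate over the project's own test cases.

```python
import unittest

# Hypothetical example tests (names invented for illustration); in practice
# these would be the project's own TestCase classes.
class BillingTests(unittest.TestCase):
    def test_negative_quantity(self):
        """Usage quantity below zero is rejected."""

    def test_max_quantity(self):
        """Usage quantity of exactly 50 is accepted."""

def describe_tests(test_case_class):
    """Return (test id, first docstring line) pairs for each test method."""
    suite = unittest.TestLoader().loadTestsFromTestCase(test_case_class)
    return [(test.id(), test.shortDescription()) for test in suite]

for test_id, description in describe_tests(BillingTests):
    print("%s: %s" % (test_id, description))
```

Since it is generated straight from the test code, the list stays current as tests change; `unittest.TestLoader().discover()` could feed whole directories through the same helper.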


Unit testing a PyQt standalone "blocking" dialog

I'm creating a series of "standalone PyQt widgets" that can be used in procedural programs; the original idea is from the easygui project. To give a concrete idea of their use, instead of writing:



>>> name = input("What is your name? ")


I could write



>>> name = get_string("What is your name? ")


and a dialog would pop up, inviting the user to enter the response. Like Python's input, get_string blocks the execution of the program while waiting for input.


I would like to set up some automatic testing of these widgets. I tried with a third-party module (pyautogui) which interacts with GUI programs, but the result is not totally reliable and probably not suitable for use with a service like https://travis-ci.org/. I would prefer to use PyQt's QTest but do not know how to connect with the dialog: I suspect I may have to use threading (as I had to do with the pyautogui solution), since the dialog effectively blocks the execution of the program.


Here is a simple implementation of get_string() mentioned above:



from PyQt4 import QtGui

def get_string(prompt="What is your name? ", title="Title",
               default_response="PyQt4", app=None):
    """GUI equivalent of input()."""

    if app is None:
        app = QtGui.QApplication([])
    app.dialog = QtGui.QInputDialog()
    text, ok = app.dialog.getText(None, title, prompt,
                                  QtGui.QLineEdit.Normal,
                                  default_response)
    app.quit()
    if ok:
        return text

if __name__ == '__main__':
    print(get_string())  # normal blocking mode
    app2 = QtGui.QApplication([])
    # perhaps start a delayed thread here, using QTest
    print(get_string(app=app2))


I have found one example of unit-testing a PyQt application (http://johnnado.com/pyqt-qtest-example/) but it did not help me find a solution in terms of connecting to a dialog.


Any help would be appreciated.


Unit Test before Nuget Publish in TFS 2013

I have a class library that I publish to our internal NuGet server. Inside an MSBuild script, I use nuget.exe to publish it. The project also has a comprehensive unit test assembly. It seems that TFS uses the build script to build the project, deploys the NuGet package, and then runs the unit tests. Obviously, this is less than ideal: the unit tests should run first, then the deploy.


How would one configure TFS to do this? Maybe a post-build script that runs nuget.exe publish?


Unittest failed with sys.exit

I'm trying to run my tests with unittest. Here is my structure:



projectname/
    projectname/
        foo.py
        bar.py
    tests/
        test_foo.py
        test_bar.py


I run it with:



cd tests/
python -m unittest discover


But in one file, for example foo.py, I use sys.exit(0), and unittest doesn't really like it:



$ python -m unittest discover
....E.
======================================================================
ERROR: test_foo (...)
----------------------------------------------------------------------
Traceback (most recent call last):
...
...
File "/home/.../projectname/foo.py", line 12, in write
sys.exit(0)
SystemExit: 0

----------------------------------------------------------------------
Ran 6 tests in 0.018s

FAILED (errors=1)


The use of sys.exit() is deliberate; I can't remove it. I know there is an exit option for the unittest.main function:



if __name__ == "__main__":
    unittest.main(exit=False)


But I want to test all the files in the tests directory. Another way is to do:



if __name__ == '__main__':
    tests = unittest.TestLoader().discover('.')
    unittest.TextTestRunner(verbosity=1).run(tests)


It finds all the test_* files, but the sys.exit() makes unittest report an error.
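For reference, one pattern that keeps a deliberate sys.exit() from being reported as an error (a sketch only; the write() function below is a made-up stand-in for the real code in foo.py) is to treat SystemExit as the expected outcome inside the test itself, using assertRaises:

```python
import sys
import unittest

def write():
    # Stand-in for the real foo.write(), which deliberately calls sys.exit(0).
    sys.exit(0)

class TestWrite(unittest.TestCase):
    def test_write_exits_cleanly(self):
        # SystemExit is an ordinary exception, so the test can treat the
        # exit itself as the expected behaviour instead of letting the
        # runner record it as an error.
        with self.assertRaises(SystemExit) as cm:
            write()
        self.assertEqual(cm.exception.code, 0)
```

Because the exception is caught inside the test method, `python -m unittest discover` sees a passing test rather than an error, and no change to foo.py is needed.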


RhinoMocks in Machine.Specifications: expectation violation, expected 1, got 0

I have a problem with mocking my interface. I want to check if the method of my interface is called, so my interface/class looks like this:



interface IMyInterface
{
    IMyInterface Method(string lel);
}

class MyClass : IMyInterface
{
    public IMyInterface Method(string lel)
    {
        // do something
        return this;
    }
}

class AnotherClass
{
    private IMyInterface _instance;

    public void AnotherMethod()
    {
        // do something with this instance of IMyInterface
    }
}


and my test class looks like this:



[Subject(typeof(AnotherClass))]
abstract class AnotherClassTest : AnotherClass
{
    protected static IMyInterface MyInterface;

    Establish context = () =>
    {
        MyInterface = fake.an<IMyInterface>(); // MockRepository.GenerateStrictMock<IMyInterface>(); this also doesn't work properly.
        MyInterface.Stub(x => x.Method("lel")).IgnoreArguments().Return(MyInterface);
    };
}

[Subject(typeof(AnotherClass))]
class When_cos_tam_cos_tam : AnotherClassTest
{
    Establish context = () =>
    {
        //MyInterface.Stub(x => x.Method("lel")).IgnoreArguments().Return(MyInterface);
    };

    Because of = () => sut.AnotherMethod();

    It Should_cos_tam = () => MyInterface.AssertWasCalled(x => x.Method("lel"));
}


And I'm getting the following error:



Rhino.Mocks.Exceptions.ExpectationViolationException' occurred in Rhino.Mocks.dll
IMyInterface.Method("lel")); Expected #1, Actual #0.

Mocked class function being called instead of only being mocked

I am mocking a class in Spock and just want to check whether the methods inside the method being tested get called; I don't want the internal methods to run.



class CodeProcessor {
    void processMessage(Request request) {
        // Some implementation
        encodeMessage(request)
    }
    void encodeMessage(Request request) {
        // Some implementation
    }
}



def "process code test"() {
    given:
    CodeProcessor codeProcessor = Mock(CodeProcessor)
    Request request = new Request()
    request.setId(10)

    when:
    codeProcessor.processMessage(request)

    then:
    1 * codeProcessor.encodeMessage(request)
}


In the above case I only need to check whether encodeMessage is called. But when I run the code above, it also executes the code inside encodeMessage(). This is not the expected behaviour. Can anyone point out where I am going wrong here?


parallel code unit tests in C# - is there a framework / tools / nunit extension

I have to test the behavior of components when executing parallel operations. I haven't found a library / toolbox / unit test extension to help me do this.


I want to produce exact execution sequences in the parallel code fragments; I could do this manually with events, but that is very time-consuming and annoying.


Error in unit testing, help me solve this

I got an error in this test; can anyone solve this and explain why it happens?


describe('Controller: LoginCtrl', function () {
    beforeEach(function () {
        var $httpBackend;
        module("loginModule");
    });

    beforeEach(inject(function ($controller, $rootScope, $location, $auth, $httpBackend) {
        this.$location = $location;
        this.$httpBackend = $httpBackend;
        this.scope = $rootscope.$new();
        this.redirect = spyOn($location, 'path');

        $controller('LoginCtrl', {
            $scope: this.scope,
            $location: $location,
            auth: $auth
        });
    }));

    describe("succesfully loggin in", function () {
        it("should redirect you to home", function () {
            // arrange
            $httpBackend.expectPOST('/login', scope.login).respond(200);
            // act
            scope.login();
            $httpBackend.flush();

            // assertion
            expect(this.redirect).toHaveBeenCalledWith('/login');
        });
    });
});


The error I got is:



Error: [$injector:unpr] Unknown provider: $authProvider <- $auth
http://errors.angularjs.org/1.2.28/$injector/unpr?p0=%24authProvider%20%3C-%20%24auth
at C:/xampp/htdocs/app/bower_components/angular/angular.js:3801
at getService (C:/xampp/app/bower_components/angular/angular.js:3929)
at C:/xampp/htdocs/app/bower_components/angular/angular.js:3806
at getService (C:/xampp/htdocs/app/bower_components/angular/angular.js:3929)
at invoke (C:/xampp/htdocs/app/bower_components/angular/angular.js:3956)
at workFn (C:/xampp/htdocs/app/bower_components/angular-mocks/angular-mocks.js:2177)



undefined
ReferenceError: Can't find variable: $httpBackend
at C:/xampp/htdocs/test/spec/controllers/login.js:32


PhantomJS 1.9.8 (Windows 8): Executed 1 of 1 (1 FAILED) ERROR (0.02 secs / 0.021 secs) Warning: Task "karma:unit" failed. Use --force to continue.


dimanche 28 décembre 2014

I am unable to make a smart unit test in Visual Studio 2015 [duplicate]


This question already has an answer here:




This is my code:



class Program
{
    public static void cal(int n1, int n2,
        out int add, out int sub, out int mul, out float div)
    {
        add = n1 + n2;
        sub = n1 - n2;
        mul = n1 * n2;
        div = (float)n1 / n2;
    }

    static void Main(string[] args)
    {
        int n1, n2;
        int add, sub, mul;
        float div;
        Console.Write("Enter 1st number");
        n1 = Convert.ToInt32(Console.ReadLine());
        Console.Write("\nEnter 2nd number");
        n2 = Convert.ToInt32(Console.ReadLine());

        Program.cal(n1, n2, out add, out sub, out mul, out div);
        Console.WriteLine("\n\n{0} + {1} = {2}", n1, n2, add);
        Console.WriteLine("{0} - {1} = {2}", n1, n2, sub);
        Console.WriteLine("{0} * {1} = {2}", n1, n2, mul);
        Console.WriteLine("{0} / {1} = {2}", n1, n2, div);

        Console.ReadLine();
    }
}


When I right-click on the method or class and create a smart unit test, it shows an error like "the selected type is not visible and cannot be explored / cannot run the test for the selected type because the type is not visible". Please help me sort out this problem.


Unable to retrieve data in unit testing

I have a question about retrieving data, which is something I am trying to learn and am using for the first time.



Error for test failed: "Test method TestBusinessLogic.MediaDurationBLTest.OpenModelTest threw exception: System.Exception: Unable to retrieve Media Duration Model Another user has already updated the model. Please refresh and try again."



MediaDurationBLTest.cs


This is the main method:



[TestMethod()]
public void OpenModelTest()
{
    MediaDurationDS mds = new MediaDurationDS();
    PopulateTestDataSet(mds);

    MediaDurationBL target = new MediaDurationBL();
    TestBusinessLogic.BusinessLogic_MediaDurationBLAccessor accessor = new TestBusinessLogic.BusinessLogic_MediaDurationBLAccessor(target);
    // assign accessor to mds
    accessor.mMediaDurationDataSet = mds;

    int modelID = 5514;

    target.OpenModel(modelID);

    Assert.AreEqual(20, mds.Tables.Count, "# of tables retrieved are different");

    // We are creating a copy of ProjectMetricData; check that copyTable and the original table are the same,
    // except ProjectMetricData has pf&d and client does not, so subtract that.
    int pfanddRows = 2;
    int projectMetricDataRows = accessor.mMediaDurationDataSet.ProjectMetricData.Rows.Count;
    int copiedRows = projectMetricDataRows - pfanddRows;
    if (copiedRows < 0)
        copiedRows = 0;

    Assert.AreEqual(accessor.mMediaDurationDataSet.ClientProjectMetricData.Rows.Count, copiedRows, "project metric data copy not created");
}


This is the inner code of target.OpenModel(modelID);. I get the error and jump straight to catch (Exception e); my data was empty at mMediaDurationDataLayer.GetModelDetails(mMediaDurationDataSet, modelID);. How do I solve the error?



public DataSet OpenModel(int modelID)
{
    try
    {
        mMediaDurationDataSet = new MediaDurationDS();
        mMediaDurationDataLayer.GetModelDetails(mMediaDurationDataSet, modelID);

        //ConvertToLocalTime(mMediaDurationDataSet.Model, "ClientLastUpdateDate");
        ConvertToLocalTime(mMediaDurationDataSet.ModelActivity, "ClientLastUpdateDate");

        // IF MODEL IS MOR, ACT
        CreateProjectForMORModel(modelID);

        // COPY PROJECT METRIC DATA TABLE INTO CLIENTPROJECTMETRICDATA
        foreach (MediaDurationDS.ProjectMetricDataRow pmdr in mMediaDurationDataSet.ProjectMetricData.Rows)
        {
            // WE DONT WANT PF&D IN CLIENT TABLE
            if (!pmdr.MetricTypeName.Equals(PFANDDPARAMETER))
            {
                CreateClientProjectMetricDataRow(pmdr, pmdr.ProjectMetricID);
            }
        }

        mMediaDurationDataSet.AcceptChanges();
        mMediaDurationDataSet.WriteXml("C:\\MediaDurationTestDataSet.xml");
        return mMediaDurationDataSet;
    }
    catch (Exception e)
    {
        string errorMessage = "Unable to retrieve Media Duration Model " + Environment.NewLine + e.Message;
        throw new Exception(errorMessage);
    }
}



public class MediaDurationDL
{
    ProjectManagerDL mProjectManagerDL;

    public void GetModelDetails(DataSet mediaDurationDataSet, int modelID)
    {
        Database db = X.XXX.WindowsApplicationTemplate.ApplicationDatabase.DatabaseFactory.CreateDatabase();

        string sqlProcedure = "uspMediaDurationGetModel";
        DbCommand dbCommand = db.GetStoredProcCommand(sqlProcedure);
        UtilityDL.SetCommandTimeout(dbCommand);
        db.AddInParameter(dbCommand, "ModelID", DbType.Int32, modelID);

        string[] tables = new string[] { "LaborCategory", "ProcessCategory", "Media", "Activity", "Time", "Model", "ModelTime", "ModelActivity",
            "Project", "ProjectAccess", "MetricType", "ProjectMetric", "ProjectMetricData" }; //, "Metric", "MetricData"};

        // RETRIEVE DATA FROM DB AND LOAD INTO DATASET
        mediaDurationDataSet.Clear();
        //PrintAllErrs(mediaDurationDataSet);
        db.LoadDataSet(dbCommand, mediaDurationDataSet, tables);
        //PrintAllErrs(mediaDurationDataSet);
    }
}

Unit tests failing without any message if linked to a framework

Setup


I've created a simple isolated case to demonstrate and reproduce the problem.


I have 2 Swift Cocoa Touch frameworks (Lib1 & Lib2; Lib2 depends on Lib1; a single Swift file with a single unit test in each).


Lib2's unit tests will only work if you remove the linked Lib1. Interestingly, both projects work on their own if run independently.


Question: is this a bug in Xcode? any workarounds?


Files: https://www.dropbox.com/s/kycnvt1qvz8zw4o/LibTest.zip?dl=0


Ruby: how can a unit test be done on a print method?

How can a unit test be done on the following printing method, get_list()? With assert_equal(["apple", "orange", "pear"], @list.get_list()) the result is nil.



@list = ["apple", "orange", "pear"]

def get_list()
  i = 0
  while (i < @list.size())
    puts @list[i]
    i = i + 1
  end
end


Please can somebody give me a tip?
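In case it helps sharpen the question: puts returns nil, so a method whose last expression is the while loop of puts calls gives assert_equal nothing to compare. One way to make it testable is to return the array and leave printing to the caller; here is a hedged sketch of that idea (the FruitList wrapper class is invented for illustration, and minitest is used for the assertion since it ships with Ruby):

```ruby
require "minitest/autorun"

# Hypothetical wrapper class; the original snippet uses a top-level @list.
class FruitList
  def initialize
    @list = ["apple", "orange", "pear"]
  end

  # Return the list instead of printing it; a caller that still wants the
  # printout can do: puts fruit_list.get_list
  def get_list
    @list
  end
end

class FruitListTest < Minitest::Test
  def test_get_list
    assert_equal(["apple", "orange", "pear"], FruitList.new.get_list)
  end
end
```

Separating "compute the value" from "print the value" is what makes the assertion possible; testing the printing itself would need stdout capture instead.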


Jasmine have createSpy() return mock object

I'm trying to mock up a response object, and it looks something like this:



var res = {
    status: jasmine.createSpy().andReturn(this),
    send: jasmine.createSpy().andReturn(this)
};


This returns the jasmine object. I'd really like to return the original res variable containing the mocked functions. Is that possible? I'm mainly implementing this to unit test functions containing res.status().send(), which is proving to be difficult.
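For reference, the reason the literal above doesn't chain is that `this` inside an object literal is not the object being built. A sketch of one workaround: create res first, then attach the spies so they can return res itself. The hand-rolled createSpy below merely stands in for Jasmine's (so the sketch runs on its own) and is not the real Jasmine implementation:

```javascript
// Minimal stand-in for jasmine.createSpy, so the sketch runs without Jasmine.
function createSpy() {
  function spy() {
    spy.calls.push(Array.prototype.slice.call(arguments));
    return spy.returnValue;
  }
  spy.calls = [];
  spy.andReturn = function (value) { spy.returnValue = value; return spy; };
  return spy;
}

// Create res first, then attach spies that close over it; inside an object
// literal, `this` is NOT the literal under construction.
var res = {};
res.status = createSpy().andReturn(res);
res.send = createSpy().andReturn(res);

// The chained call the question wants to exercise now works:
res.status(404).send('not found');
console.log(res.status.calls.length, res.send.calls.length); // logs: 1 1
```

With real Jasmine the shape is the same: declare `var res = {};` and then assign `res.status = jasmine.createSpy().andReturn(res);` afterwards.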


passing arguments to nosetest

I write my nose unit tests to be fairly complete, but as a result they can take a while to run. I would like to be able to pass in an optional command-line argument to run some quick version of the tests (e.g. try a handful of possible inputs, instead of every possible input).


Ideally, I'd be able to say something like:


nosetest --quick my_module/tests/my_test.py


And in my_test.py, have:



def test_something():
    if nose.command_line_args.quick:
        <run quick test>
    else:
        <run long test>


Is there a simple way to do this? I know that one way might be to write a nose plugin, but I got scared away when the docs said I needed to install any plugins I write using setuptools. Learning setuptools to install a nose plugin just to pass in a flag is a bit of yak-shaving I'd love to avoid if I can.
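One plugin-free workaround worth noting (a sketch; the QUICK_TESTS variable name is made up for illustration) is to drive the quick/slow choice from an environment variable, which reaches the test module without nose having to parse any extra flags:

```python
import os

# Made-up flag name; the quick run would be invoked as:
#   QUICK_TESTS=1 nosetests my_module/tests/my_test.py
QUICK = os.environ.get("QUICK_TESTS") == "1"

def inputs_to_try():
    # A handful of inputs in quick mode, a larger sweep otherwise.
    return range(5) if QUICK else range(1000)

def test_something():
    for value in inputs_to_try():
        assert value >= 0  # stand-in for the real per-input check
```

The trade-off is that the switch is no longer visible in `nosetests --help`, but it needs no setuptools installation and works with plain `python -m unittest` runners too.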