Tuesday, March 31, 2015

Is it right to write multiple test classes for one source class

Currently I am working in a legacy codebase. I have introduced a new method to a class that does not have a corresponding test class. Let's call the method I introduced isValidState.


This method just returns a boolean based on some of the state (properties of the source class).


Example:



boolean isValidState() {
    return a && b && c; // a, b, c are boolean properties of the source class
}


To test this method I am thinking of having 8 test methods (since there are 8 possible combinations of a, b, and c).
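A parameterized test could collapse those eight methods into a single one; here is a rough JUnit 4 sketch of what I mean (SourceClass and its three-argument constructor are hypothetical stand-ins for my real class):

import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class IsValidStateTest {

    @Parameters
    public static Collection<Object[]> combinations() {
        return Arrays.asList(new Object[][] {
            // a,     b,     c,     expected
            { true,  true,  true,  true  },
            { true,  true,  false, false },
            { true,  false, true,  false },
            { true,  false, false, false },
            { false, true,  true,  false },
            { false, true,  false, false },
            { false, false, true,  false },
            { false, false, false, false },
        });
    }

    private final boolean a, b, c, expected;

    public IsValidStateTest(boolean a, boolean b, boolean c, boolean expected) {
        this.a = a;
        this.b = b;
        this.c = c;
        this.expected = expected;
    }

    @Test
    public void isValidStateMatchesExpectedResult() {
        // SourceClass and this constructor are hypothetical placeholders
        SourceClass source = new SourceClass(a, b, c);
        assertEquals(expected, source.isValidState());
    }
}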


Since the source class does not have a test class, I have to write a new one, which is fine. But in the future I might need to add more methods like this to the source class, and my test class will grow heavily and become a very lengthy class.


So, to avoid that, is it correct to have multiple test classes that test the functionality of one source class? Or is there a better way to structure the test class(es) for one single source class?


Thanks in advance


How to increase unit test execution time?

Hello, I am writing unit tests for an application that has Bootstrap modal popups.


The issue is that the modal popup has an animation that takes a while to open the modal. When I trigger the apply button click on the modal, it leads to an exception because the modal popup has not loaded yet.


Removing the line below from bootstrap.js fixes the issue, but I can't change or remove it, as it is part of my application.



doAnimate ?
    this.$backdrop.one($.support.transition.end, callback) :
    callback()


To overcome this issue I have tried timeout, setTimeout, and delay functions, but that leads to the error: Error: timeout of 2000ms exceeded



this.$fixture.find(".apply").trigger('click'); // opens modal popup

$('.apply-yes').delay(1900).trigger('click'); // clicks on apply button on modal


Please suggest an appropriate solution to the problem that doesn't change anything in Bootstrap or the application.


karma:unit fails with tmp browserify errors

I'm having the following issue, and I'm the only developer on a team of six experiencing it.


When I run the following command: $ grunt unit (Task: clean:reports install_custom_coverage karma:unit), I receive the following result:



...
DEBUG [web-server]: serving: C:\Dev\life-web_components\node_modules\karma\static/context.html
PhantomJS 1.9.8 (Windows 7) ERROR
TEST RUN WAS CANCELLED because this file contains some errors:
C:/cygwin/tmp/2cfb2e9479b44a59f6d3c57d366bd5b4.browserify


IE 8.0.0 (Windows 7) ERROR
TEST RUN WAS CANCELLED because this file contains some errors:
C:/cygwin/tmp/2cfb2e9479b44a59f6d3c57d366bd5b4.browserify

Chrome 41.0.2272 (Windows 7) ERROR
TEST RUN WAS CANCELLED because this file contains some errors:
C:/cygwin/tmp/2cfb2e9479b44a59f6d3c57d366bd5b4.browserify



DEBUG [karma]: Run complete, exiting.
DEBUG [launcher]: Disconnecting all browsers
DEBUG [framework.browserify]: cleaning up
DEBUG [launcher]: Process PhantomJS exited with code 0
DEBUG [temp-dir]: Cleaning temp dir C:\cygwin\tmp\karma-34162292
DEBUG [launcher]: Process Chrome exited with code 0
DEBUG [temp-dir]: Cleaning temp dir C:\cygwin\tmp\karma-61774528
DEBUG [reporter.junit]: JUnit results written to "C:/Dev/life-web_components/reports/unit_tests.xml".

DEBUG [launcher]: Killed extra IE process 6528
DEBUG [launcher]: Process IE exited with code 0
DEBUG [temp-dir]: Cleaning temp dir C:\cygwin\tmp\karma-94332604
Warning: Task "karma:unit" failed. Use --force to continue.

Aborted due to warnings.


I've tried a range of suggestions: restarting CMDER, restarting the machine, deleting node_modules, reinstalling global node modules, setting autoWatch to false, and various other attempts, with no luck.


As I'm the only developer on the team experiencing the issue, it appears to be environment-related.


How to refer to the datasource directory in MS Visual Studio unit test?

I have created an App.config file in the MS Visual Studio. The content of the App.config file is as below:



<?xml version="1.0" encoding="utf-8" ?>
<configuration>
    <configSections>
        <section name="microsoft.visualstudio.testtools" type="Microsoft.VisualStudio.TestTools.UnitTesting.TestConfigurationSection, Microsoft.VisualStudio.QualityTools.UnitTestFramework, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"/>
    </configSections>
    <connectionStrings>
        <add name="MyExcelConn" connectionString="Dsn=Excel Files;dbq=TestCases\data.xlsx;defaultdir=.; driverid=790;maxbuffersize=2048;pagetimeout=5" providerName="System.Data.Odbc" />
    </connectionStrings>
    <microsoft.visualstudio.testtools>
        <dataSources>
            <add name="MyExcelDataSource" connectionString="MyExcelConn" dataTableName="TestData$" dataAccessMethod="Sequential"/>
        </dataSources>
    </microsoft.visualstudio.testtools>
</configuration>


I have created a folder named "TestCases" in the same directory as the App.config file. I put the data.xlsx file inside the TestCases folder. For your information, data.xlsx is the data source of my unit tests. I coded the directory of the data source as TestCases\data.xlsx but it doesn't work. I don't want to hard code it as "C:\User...." as this would require me to change the code if I wish to run the code on another computer. Any suggestions?


Intern - window is undefined

I have a loop of tests running in intern-geezer, with about twenty out of a hundred very similar tests running successfully. Then suddenly:



FATAL ERROR
ReferenceError: window is not defined


and the loop stops. There are no explicit calls to window or document in my code. It's pure JS. I'm using intern-geezer, 2.2.2. The line numbers referenced in the error stack make absolutely no sense. They're way off.


I've read the suggestion to switch from the command:



./node_modules/.bin/intern-client config=tests/intern


to:



./node_modules/.bin/intern-runner config=tests/intern


but I don't want to connect to a server or open a browser (there's a separate, strange loading error occurring there which seems specific to geezer). I want to get this going at the command line. Grateful for any help, I'm totally new to Intern.


C++ dependency injection to test a class that uses system calls

I am trying to use template dependency injection to test a C++ class that uses C system calls to operate over a file descriptor. The idea is to have an abstract class and an instance to wrap the system calls like read(), write(), etc. Then I use a mock to test my target class. The abstract class and the system-call wrapper look like this (I am going to omit parameters to be clear):



class OSCall {
public:
    virtual int read() = 0;
    virtual int write() = 0;
};

class DefaultOSCall : public OSCall {
public:
    int read() { ... }   // wraps the read() system call
    int write() { ... }  // wraps the write() system call
};


Later I inject the OSCall into the class where I want to use it:



template<typename OSCall>
class FD {
public:
    OSCall osCall_;
    OSCall &GetOSCall() { return osCall_; }

    int read() { return osCall_.read(); }
    int write() { return osCall_.write(); }
};


Now, if I want to use a mock to test my FD class, I just need to pass my mock as the template parameter and get the mock instance using GetOSCall().


Let's say I want to use the FD as a member of another class:



template<typename OSCall>
class User {
public:
    void DoSomething() { fd_.read(); /* ... */ }
    OSCall &GetMemberOSCall() { return fd_.GetOSCall(); }
private:
    FD<OSCall> fd_;
};


If I want to test the user with a mock, I can get the OSCall instance using GetMemberOSCall(); it works, but is it one of the best ways to do it? In the end I want to inject a mock into a class member and set expectations on the return values of the member's mock. I hope I made myself clear.


Thanks


Log the thrown exception

I'm looking for a way to include the error message when the thrown exception is expected.


Here's my test:



describe('Process Text', function(){
    _.each(shouldThrow, function(option){
        it('throw error ('+option+')', function(){
            expect(function(){
                main.textValidation(option)
            }).to.throw()
        })
    })
    _.each(shouldNotThrow, function(option){
        it('not throw error ('+option+')', function(){
            expect(function(){
                main.textValidation(option)
            }).to.not.throw()
        })
    })
})


Here's the logged message from mocha:



✓ throw error (notify --message "Hello world" --open "https://www.holstee.com" --queue 15-04-30-16)
✓ throw error (--notify --message "Hello world" --open "https://www.holstee.com")
✓ throw error (--notify --message "Hello world" --open "https://www.holstee.com --queue "2015-04-30-16-10)
✓ not throw error (notify --message 'Hello world' --open 'https://www.holstee.com')
✓ not throw error (notify --message "Hello world" --open "https://www.holstee.com --queue 2015-04-30-16-10)


The desired output is something like this



✓ throw error
* option: notify --message "Hello world" --open "https://www.holstee.com" --queue 15-04-30-16
* error: invalid flag value `queue` should be 16 characters long
✓ throw error
* option: --notify --message "Hello world" --open "https://www.holstee.com"
* error: invalid parsed argument(s): notify
✓ throw error
* option: --notify --message "Hello world" --open "https://www.holstee.com --queue "2015-04-30-16-10
* error: invalid unparsed argument(s): 2015-04-30-16-22
✓ not throw error
* notify --message 'Hello world' --open 'https://www.holstee.com'
✓ not throw error
* notify --message "Hello world" --open "https://www.holstee.com --queue 2015-04-30-16-10


I tried this, but it did not work:



describe('Process Text', function(){

    beforeEach(function () {
        test_name = this.currentTest.title;
    });

    _.each(shouldThrow, function(option){
        it('throw error', function(){
            test_name += "\n"
            test_name += "* option:"
            test_name += option
            expect(function(){
                try{
                    main.textValidation(option)
                }catch(e){
                    test_name += "\n"
                    test_name += "* message:"
                    test_name += e.message
                    throw(e)
                }
            }).to.throw()
        })
    })
    _.each(shouldNotThrow, function(option){
        it('not throw error', function(){
            test_name += "\n"
            test_name += "* option:"
            test_name += option
            expect(function(){
                main.textValidation(option)
            }).to.not.throw()
        })
    })
})

How to convert integration tests to unit tests

I've been tasked with changing someone else's integration tests into unit tests. We have business objects that talk to the database, and our tests currently reflect that. The problem is that I have code that calls the DB directly within the method, and I want it to hit mock data instead of the DB. How do you do that?



List<listOfStuff> listing = getDataFromDB(DBStuff); // this is what I want to not happen in my test


I can't change the method, and I read something about wrapping the method in an interface, but I'm unsure of how to do that...
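My rough understanding of the interface idea is to pull the direct call behind a seam so a test can substitute a fake; a minimal sketch (all names hypothetical):

import java.util.Collections;
import java.util.List;

// All names here are hypothetical; the point is the seam, not the names.
interface StuffRepository {
    List<String> getStuff();
}

// Production implementation: the direct DB call moves in here, unchanged.
class DbStuffRepository implements StuffRepository {
    @Override
    public List<String> getStuff() {
        return getDataFromDB(); // stands in for the real database call
    }

    private List<String> getDataFromDB() {
        return Collections.emptyList();
    }
}

// Test double: returns canned data and never touches the database.
class FakeStuffRepository implements StuffRepository {
    @Override
    public List<String> getStuff() {
        return Collections.singletonList("mock row");
    }
}

// The business object asks for the interface, so a test can hand it the fake.
class BusinessObject {
    private final StuffRepository repository;

    BusinessObject(StuffRepository repository) {
        this.repository = repository;
    }

    void doWork() {
        List<String> listing = repository.getStuff();
        // ... work with listing ...
    }
}

Is that the right direction?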


Purposefully failing a JUnit test upon method completion

Background


I am working with a Selenium/JUnit test environment and I want to implement a class to perform "soft asserts", meaning that I want it to record whether or not the assert passed, but not actually fail the test case until I explicitly tell it to validate the asserts. This way I can check multiple fields on a page and record all of the ones which do not match.




Current Code


My "verify" methods appear as such (similar ones exist for assertTrue/assertFalse):



public static void verifyEquals(Object expected, Object actual) {
    try {
        assertEquals(expected, actual);
    } catch (Throwable e) {
        verificationFailuresList.add(e);
    }
}


Once all the fields have been verified, I call the following method:



public static void checkAllPassed() {
    if (!verificationFailuresList.isEmpty()) {
        for (Throwable failureThrowable : verificationFailuresList) {
            log.error("Verification failure:" + failureThrowable.getMessage(), failureThrowable);
            // assertTrue(false);
        }
    }
}




Question


At the moment I am just using assertTrue(false) as a way to quickly fail the test case; however, this clutters the log with a nonsense failure and pushes the real problem further up. Is there a cleaner way to purposefully fail a JUnit test case? If not, is there a better solution to implement soft asserts? I know of an article which has a very well done implementation, but to my knowledge JUnit has no equivalent to the IInvokedMethodListener class.
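One option I have been considering is to aggregate the recorded messages and hand them to JUnit's Assert.fail(), so the single reported failure carries the real problems instead of a bare assertion; a sketch of checkAllPassed reworked that way:

import static org.junit.Assert.fail;

public static void checkAllPassed() {
    if (!verificationFailuresList.isEmpty()) {
        StringBuilder summary = new StringBuilder();
        for (Throwable failureThrowable : verificationFailuresList) {
            log.error("Verification failure:" + failureThrowable.getMessage(), failureThrowable);
            summary.append(failureThrowable.getMessage()).append('\n');
        }
        // One AssertionError whose message lists every recorded failure,
        // instead of the opaque assertTrue(false).
        fail(verificationFailuresList.size() + " verification failure(s):\n" + summary);
    }
}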


Javascript Unit test

I have a JavaScript app of around 250 lines and want to add tests to it. Whenever I make a tiny change, I have to run tests for at least 10 cases manually, which I want to automate.


I could have gone for frameworks as suggested in various posts, but the solution I want has minimum friction and codebase. Something like a single file for unit testing may work.


Is there a way to do JS testing without any frameworks? I want to write both unit tests and functional tests. Or, if frameworks are the only option, which frameworks are preferred in terms of ease of plugging into existing code plus learning curve?


junit 4 test case to test rest web service

I have written a RESTful web service and have to test it using JUnit 4. I have already written a client using the Jersey Client API, but I want to know if I can test my service with JUnit 4 only. Can someone help me with a sample at least?


My REST service has an authenticate method that takes a username and password and returns a token.


I have written test cases for the authenticate method, but I am not sure how to test it through its URL.



public class TestAuthenticate {
    Service service = new Service();
    String username = "user";
    String password = "password";
    String token;

    @Test(expected = Exception.class)
    public final void testAuthenticateInputs() {
        password = "pass";
        service.authenticate(username, password);
    }

    @Test(expected = Exception.class)
    public final void testAuthenticateException() {
        username = null;
        String token = service.authenticate(username, password);
        assertNotNull(token);
    }

    @Test
    public final void testAuthenticateResult() {
        String token = service.authenticate(username, password);
        assertNotNull(token);
    }
}
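The closest I have gotten to a URL-level test is hitting a deployed instance with the JDK's HttpURLConnection; here is a sketch of what I have in mind (the endpoint URL and parameter names are guesses):

import static org.junit.Assert.assertEquals;

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

import org.junit.Test;

public class TestAuthenticateOverHttp {

    // Hypothetical deployment URL; adjust host, port and path to the real service.
    private static final String AUTH_URL = "http://localhost:8080/service/authenticate";

    @Test
    public void authenticateReturns200ForValidCredentials() throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(AUTH_URL).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        try (OutputStream out = conn.getOutputStream()) {
            out.write("username=user&password=password".getBytes("UTF-8"));
        }
        // A valid login should answer 200; the token would be in the response body.
        assertEquals(200, conn.getResponseCode());
    }
}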


How to unittest a function that writes xml?

I want to unit test a function that outputs data in XML format. A direct string comparison would obviously not work, because the order of the elements and the amount of whitespace do not matter.
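The closest thing I have found is comparing parsed documents instead of raw strings, for example with XMLUnit; a sketch of that idea (assuming XMLUnit 1.x is available, and using Java for illustration):

import static org.junit.Assert.assertTrue;

import org.custommonkey.xmlunit.Diff;
import org.custommonkey.xmlunit.XMLUnit;
import org.junit.Test;

public class XmlOutputTest {

    @Test
    public void producesEquivalentXml() throws Exception {
        XMLUnit.setIgnoreWhitespace(true); // whitespace differences don't count

        String expected = "<root><b>2</b><a>1</a></root>";
        String actual = "<root><a>1</a><b>2</b></root>"; // stand-in for the function's output

        Diff diff = new Diff(expected, actual);
        // similar() tolerates reordered sibling elements in simple cases,
        // unlike identical(), which requires the exact same sequence.
        assertTrue("XML differs: " + diff, diff.similar());
    }
}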


Testing promises and sync functions that throw errors

I'm trying to build and test a function at the same time. Testing makes sense and I love it in theory, but when it comes down to it, it is always a pain in the behind.


I have a function that takes a string and throws errors when something goes awry. If all goes well, it returns the original text argument, and therefore a truthy value; if not, the error should be caught by the promise it's in (or by itself, as the promise).


This is the test / what I actually want to do (which doesn't work).



var main = require("./index.js")
var Promise = require("bluebird")
var mocha = require("mocha")
var chai = require("chai")
var chaiPromise = require("chai-as-promised")
chai.use(chaiPromise)

var shouldThrow = [
"random", // invalid non-flag
"--random", // invalid flag
"--random string", //invalid flag
"--wallpaper", // invalid flag w/ match
"--notify", // invalid flag w/ match
"wallpaper", // valid non-flag missing option(s) image
"wallpaper image.jpg" // invalid flag value
"wallpaper http://ift.tt/1G3L9LT", // invalid flag value
"wallpaper //cdn.shopify.com/s/files/1/0031/5352/t/28/assets/favicon.ico?12375621748379006621", // invalid flag value
"wallpaper http://ift.tt/1FecoQO", // invalid flag value
"wallpaper http://ift.tt/1G3L9LV", // invalid flag value
"wallpaper http://ift.tt/1Fecqs1", // invalid flag value
"wallpaper http://ift.tt/1G3L76O --queue", // invalid flag value
"wallpaper http://ift.tt/1G3L76O --queue "+moment().subtract(1, "month").format("YYYY-MM-DD-HH-mm"), // invalid flag value
"wallpaper http://ift.tt/1G3L76O --queue "+moment().add(1, "month").format("YY-MM-DD-HH"), // invalid flag value
"wallpaper --image http://ift.tt/1G3L9LT", // invalid flag value not https
"wallpaper --image //cdn.shopify.com/s/files/1/0031/5352/t/28/assets/favicon.ico?12375621748379006621", // invalid flag no protocol
"wallpaper --image http://ift.tt/1FecoQO", // invalid flag value not https
"wallpaper --image http://ift.tt/1G3L9LV", // invalid flag value not valid image
"wallpaper --image http://ift.tt/1Fecqs1", // invalid flag image not found
"wallpaper --image http://ift.tt/1G3L76O --queue", // invalid subflag queue missing value
"wallpaper --image http://ift.tt/1G3L76O --queue "+moment().subtract(1, "month").format("YYYY-MM-DD-HH-mm"), // invalid subflag queue date value is past
"wallpaper --image http://ift.tt/1G3L76O --queue "+moment().add(1, "month").format("YY-MM-DD-HH"), // invalid subflag queue date value format
"--wallpaper --image http://ift.tt/1G3L76O", //no action non-flag
"--wallpaper --image http://ift.tt/1G3L76O --queue "+moment().add(1, "month").format("YYYY-MM-DD-HH-mm"), //no action non-flag
"notify", // valid non-flag missing option(s) message, open
'notify --message "Hello world"', // valid flag missing params open
'notify --open "https://www.holstee.com"', // valid flag missing params message
'notify --message "Hello world" --open "http://www.holstee.com"', // invalid subflag value `open` should be https
'notify --message "Hello world" --open "https://www.holstee.com" --queue', // invalid subflag queue missing value
'notify --message "Hello world" --open "https://www.holstee.com" --queue '+moment().subtract(1, "month").format("YYYY-MM-DD-HH-mm"), // invalid subflag queue date value is past
'notify --message "Hello world" --open "https://www.holstee.com" --queue '+moment().add(1, "month").format("YY-MM-DD-HH"), // invalid subflag queue date value format
'--notify --message "Hello world" --open "https://www.holstee.com"', //no action non-flag
'--notify --message "Hello world" --open "https://www.holstee.com --queue "'+moment().add(1, "month").format("YYYY-MM-DD-HH-mm"), //no action non-flag
]

var shouldNotThrow = [
'notify --message "Hello world" --open "https://www.holstee.com"',
'notify --message "Hello world" --open "https://www.holstee.com --queue "'+moment().add(1, "month").format("YYYY-MM-DD-HH-mm"),
"wallpaper --image http://ift.tt/1G3L76O",
"wallpaper --image http://ift.tt/1G3L76O --queue "+moment().add(1, "month").format("YYYY-MM-DD-HH-mm"),
]

describe('Process Text', function(){
    return Promise.each(shouldThrow, function(option){
        it('throw error', function(){
            return main.processText(option).should.throw()
        })
    })
    return Promise.each(shouldNotThrow, function(option){
        it('throw error', function(){
            return main.processText(option).should.not.throw()
        })
    })
})


Here's a snapshot of the non-working function I'm trying to test.



main.processText = function(text){
    // general validation
    var args = minimist(text.split(" "))
    var argKeys = _.chain(args).keys().without("_").value()
    var topLevelFlags = _.keys(flags)
    var topLevelActions = _.chain(flags).keys().without("queue").value()
    var allFlags = _.chain(flags).map(function(subFlags, key){
        return subFlags.concat(key)
    }).flatten().value()

    var accidental = _.intersection(allFlags, args._)
    var correct = _.map(accidental, function(flag){
        return "--"+flag
    })
    if(accidental.length) throw new Error("non-flag data present / "+ accidental.join(",") + " should be: " + correct.join(","))
    if(!args._.length) throw new Error("invalid non-flag data present")

    var difference = _.difference(allFlags, argKeys)
    var intersection = _.intersection(allFlags, argKeys)
    var invalid = _.without.apply(_, [argKeys].concat(intersection))
    if(intersection.length !== argKeys.length) throw new Error("invalid flags / "+ invalid)
    var topLevelIntersection = _.intersection(topLevelActions, argKeys)
    if(topLevelIntersection.length > 1) throw new Error("too many top-level flags")
    if(args.wallpaper){
        // wallpaper validation
        var parsedUrl = url.parse(args.wallpaper)
        if(!parsedUrl.hostname) throw new Error("hostname is missing, might be local file reference")
        if(parsedUrl.protocol !== "https") throw new Error("image protocol should be https")
        var fileExtension = path.extname(parsedUrl.path)
        if(!_.contains([".png", ".jpg", ".jpeg"], fileExtension)) throw new Error("wallpaper image is invalid file type")
    }else if(args.notify){
        // notify validation
        if(args.notify !== true) throw new Error("notify shouldn't have value")
        var notifyIntersection = _.intersection(args.notify, flags.notify)
        var missing = _.without.apply(_, [args.notify].concat(notifyIntersection))
        if(missing) throw new Error("notify missing required param: "+ missing.join(","))
    }
}


Note it's not a promise and doesn't return any promises yet. One of the validation features I want is to check if a URL responds with a 200 status code; that's going to be a request promise. If I update this function, do all of the function contents need to be nested within a Promise.resolve(false).then()? Perhaps the promise shouldn't be in this block of code and all async validation operations should exist somewhere else?


I don't know what I'm doing and I'm a little frustrated. I'm of course looking for some golden bullet or whatever that will make sense of all this.


Ideally I could use some help on how to test this kind of function. If I make it into a promise later on I still want all my tests to work.


Using Moq With Castle Windsor

I am trying to do a simple unit test of my home controller using Moq but I'm getting an exception of


An exception of type 'Castle.MicroKernel.ComponentNotFoundException' occurred in Castle.Windsor.dll but was not handled in user code: "No component for supporting the service OrderTrackingSystem.Core.Repositories.IRepository was found". It occurs in my HomeController on the line _repository = MvcApplication.Container.Resolve<IRepository>().



private IRepository _repository;
_repository = MvcApplication.Container.Resolve<IRepository>();

public HomeController(IRepository repository)
{
    _repository = repository;
}


Here is my unit test code.



[TestClass]
public class HomeControllerTests
{
    private Mock<IRepository> _repositoryMock;

    [TestInitialize]
    public void TestSetup()
    {
        _repositoryMock = new Mock<IRepository>();
    }

    [TestMethod]
    public void HomeControllerIndexReturnsAView()
    {
        // Arrange
        var controller = new HomeController(_repositoryMock.Object);

        // Act
        var result = controller.Index() as ViewResult;

        // Assert
        Assert.IsNotNull(result);
    }
}


I feel like I must be missing something simple with registering or setting up the repository in my unit test. Any ideas?


Roboguice CreationException in test

I'm new to RoboGuice... I'm trying to correctly instantiate the injector inside the Application.onCreate() method. I tried two ways:



  1. RoboGuice.overrideApplicationInjector(this, RoboGuice.newDefaultRoboModule(this), new SettingsModule());

  2. RoboGuice.getOrCreateBaseApplicationInjector(this, RoboGuice.DEFAULT_STAGE, Modules.override(RoboGuice.newDefaultRoboModule(this)).with(new SettingsModule()));


When running one of the AndroidTestCases I have, I ended up with two different results, depending on whether I used 1 or 2:



  1. worked correctly

  2. threw a com.google.inject.CreationException for this specific reason:



Could not find a suitable constructor in my.package.SharedPreferencesStorage. Classes must have either one (and only one) constructor annotated with @Inject or a zero-argument constructor that is not private.



The problem is that SharedPreferencesStorage has only one constructor annotated with @Inject, so the exception is quite misleading.


At this point I could go on using overrideApplicationInjector() in my Application, but according to the documentation it should be used only in testing. Is that correct? Are there implications to using it outside a testing class? Why doesn't getOrCreateBaseApplicationInjector() work instead? Any hint is appreciated!


Angular-ui-tooltip unit testing (Jasmine)

I'm trying to test my directive (it shows an Angular tooltip when text truncates). Angular appends the tooltip to the body, so I want to trigger a mouseover event and check the body tag for 'div.tooltip'. But nothing happens; 'div.tooltip' isn't appended.


Here is some code:





beforeEach(angular.mock.inject(function($rootScope, $compile) {
    el = angular.element(
        "<div class='test' style='width:50px' kx-tooltip-if-truncated='{{text}}'></div>"
    );
    $body = $('body');
    $body.append(el);
    bodyScope = $rootScope.$new();
    bodyScope.text = 'text 96874198796871987187118718';
    $compile(el)(bodyScope);
    bodyScope.$apply();
    pTooltip = el.find('p[tooltip]');
    scope = el.scope();
}));


it('tracks directive correct initialization', function() {
    expect(pTooltip.text()).toEqual(bodyScope.text);
}); // this works fine; everything needed is appended (see the HTML section)


it('tests long-text append tooltip', function() {
    el.find('p').trigger('mouseover');
    console.log($body.find('div.tooltip')[0]); // finds nothing; the div isn't appended
});



<div class="test ng-scope ng-isolate-scope" style="width:50px" kx-tooltip-if-truncated="text 96874198796871987187118718">
<p tooltip="text 96874198796871987187118718"
tooltip-placement="bottom"
tooltip-append-to-body="true"
tooltip-trigger="click"
class="u-text-truncate ng-scope">
text 96874198796871987187118718
</p>
</div>

<!--that's what my directive append--!>



Sinon Spy not working with Javascript call or apply

I'm trying to use sinon.spy to check if the play function is being called for a component. The problem is that the spy counter is not updating, even though I've confirmed that my component's function is indeed being called.


I've tracked it down to the use of JavaScript's call function:



handleDone: function(e) {
    for (var i = 0; i < this.components.length; i++) {
        if (this.components[i].element === e.target) {
            if (this.components[i].hasOwnProperty("play")) {
                // This won't trigger the spy.
                this.components[i]["play"].call(this.components[i].element);
            }
            break;
        }
    }
}


A similar thing happens when swapping call for apply.


Anyone know of a workaround?


Thx.


ReSharper runs unit tests from a specified folder but the application configuration file is missing

When I run unit tests from a specified folder, I realized that the application configuration file is not used, as it is simply not there. What is the idea here? Should the application configuration file be copied by ReSharper (which would mean I might have found a bug)? Or should I copy the file there with a post-build event or something like that? What would be a clean solution for this?


That really no application configuration file is loaded can be confirmed with Fuslogvw.exe, for example. I am using ReSharper 9.0.


Unit Testing for Marklogic

We are looking for a framework to test our MarkLogic XQuery code. We can see MarkLogic/xqunit is a good framework, but it does not have a code coverage feature. What is the best framework for writing unit test cases for MarkLogic XQuery?


How to mock an Elasticsearch Java Client?

Do you know how to properly mock the Elasticsearch Java client? Currently, to mock the following request in Java:



SearchResponse response = client.prepareSearch(index)
        .setTypes(type)
        .setFrom(0).setSize(MAX_SIZE)
        .execute()
        .actionGet();
SearchHit[] hits = response.getHits().getHits();


I have to mock:



  • client.prepareSearch

  • SearchRequestBuilder:

    • builder.execute

    • builder.setSize

    • builder.setFrom

    • builder.setTypes



  • ListenableActionFuture:

    • action.actionGet



  • SearchResponse:

    • response.getHits

    • searchHits.getHits




So my test looks like:



SearchHit[] hits = ..........;

SearchHits searchHits = mock(SearchHits.class);
when(searchHits.getHits()).thenReturn(hits);

SearchResponse response = mock(SearchResponse.class);
when(response.getHits()).thenReturn(searchHits);

ListenableActionFuture<SearchResponse> action = mock(ListenableActionFuture.class);
when(action.actionGet()).thenReturn(response);

SearchRequestBuilder builder = mock(SearchRequestBuilder.class);
when(builder.setTypes(anyString())).thenReturn(builder);
when(builder.setFrom(anyInt())).thenReturn(builder);
when(builder.setSize(anyInt())).thenReturn(builder);
when(builder.execute()).thenReturn(action);

when(client.prepareSearch(index)).thenReturn(builder);


Ugly... So I would like to know if there is a more "elegant way" to mock this code.
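One alternative I have seen mentioned is Mockito's deep stubs, which would collapse the whole fluent chain into a single stubbing; a sketch (index, type and MAX_SIZE are the variables from above, and I am not sure how well deep stubs play with every Elasticsearch type):

import static org.mockito.Mockito.RETURNS_DEEP_STUBS;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.elasticsearch.client.Client;
import org.elasticsearch.search.SearchHit;

// Deep stubs let every intermediate builder in the chain be auto-mocked.
Client client = mock(Client.class, RETURNS_DEEP_STUBS);

SearchHit[] hits = new SearchHit[0]; // canned hits for the test
when(client.prepareSearch(index)
        .setTypes(type)
        .setFrom(0).setSize(MAX_SIZE)
        .execute()
        .actionGet()
        .getHits()
        .getHits())
    .thenReturn(hits);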


Thanks


Coded UI: Not able to use a CSV file in a script in TFS Solution Explorer while it works fine on the local machine

I have done the steps below; it works fine on my local machine, but when I work with TFS Solution Explorer I get the error below.


Steps:

1. Created the data.csv file.
2. Used Advanced Save Options to save it as Unicode (UTF-8 without signature, codepage 65001).
3. Set the data.csv file to "Copy if newer".


Error: The character encoding for the file D:\Testcase\data.csv has changed. Your source control provider may have problems managing files with this type of encoding. For example, if you save an ANSI-encoded file as UTF-8 you may not be able to merge or show differences.


Code:

[DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV", "|DataDirectory|\data.csv", "data#csv", DataAccessMethod.Sequential), DeploymentItem("data.csv"), TestMethod]
public void CodedUITestMethod1()
{
    Console.WriteLine(TestContext.DataRow["firstname"].ToString());
    // To generate code for this test, select "Generate Code for Coded UI Test" from the shortcut menu and select one of the menu items.
}


Any benefits to writing unit tests for .NET code in IronPython instead of C#?

So far in our organization we have used C# for unit testing .NET code. We are trying to evaluate replacing C# with IronPython as the language of choice for writing unit tests. We are not looking into this necessarily because Python code may be easier to work with for small blocks of code like a typical test, but with the hope of creating more maintainable unit tests. First of all, I would expect setting up stubs, mocks, and fakes to be easier in a dynamic language. I was wondering if others have tried a similar approach? Is it worth making the change? For example, if we will just end up rewriting C# code in Python, I see no real benefit. Is there any testing framework that you would recommend for this scenario?


Unit Testing Django Rest Framework Authentication at Runtime

I basically want to turn TokenAuthentication on but only for 2 unit tests. The only option I've seen so far is to use @override_settings(...) to replace the REST_FRAMEWORK settings value.



REST_FRAMEWORK_OVERRIDE = {
    'PAGINATE_BY': 20,
    'TEST_REQUEST_DEFAULT_FORMAT': 'json',
    'DEFAULT_RENDERER_CLASSES': (
        'rest_framework.renderers.JSONRenderer',
        'rest_framework_csv.renderers.CSVRenderer',
    ),
    'DEFAULT_AUTHENTICATION_CLASSES': (
        'rest_framework.authentication.TokenAuthentication',
    ),
    'DEFAULT_PERMISSION_CLASSES': (
        'rest_framework.permissions.IsAuthenticated',
    ),
}

@override_settings(REST_FRAMEWORK=REST_FRAMEWORK_OVERRIDE)
def test_something(self):


This isn't working. I can print the settings before and after the decorator and see that the values changed, but Django doesn't seem to be respecting them. It allows all requests sent using the test Client or the DRF APIClient object through without authentication. I'm getting 200 responses when I would expect 401 Unauthorized.


If I insert that same dictionary into my test_settings.py file in the config folder, everything works as expected. However, like I said, I only want to turn on authentication for a couple of unit tests, not all of them. My thought is that Django never revisits the settings for DRF after initialization, so even though the setting values are correct, they are not used.


Has anyone run into this problem and found a solution? Or workaround?


I need a sample of using MbUnit.Framework.TestCase

How can I use MbUnit.Framework.TestCase in the MbUnit framework? When I can use NUnit, I write code like this:



using Fw = NUnit.Framework;

[Fw.TestFixture, Fw.Apartment(ApartmentState.STA)]
public class Tests {
    [Fw.Test]
    [Fw.Category("Some category name")]
    [Fw.TestCase(@".\data-for-testing\data_01.dwg")]
    [Fw.TestCase(@".\data-for-testing\data_02.dwg")]
    [Fw.TestCase(@".\data-for-testing\data_03.dwg")]
    [Fw.TestCase(@".\data-for-testing\data_04.dwg")]
    public void MyTestCases(String dwgFileName) {
        // Here is the code of my test
    }
}


But sometimes I can't use NUnit (unfortunately) and only the MbUnit framework is possible (for AutoCAD versions older than AutoCAD 2011). I need to use MbUnit.Framework.TestCase in MbUnit, but I don't understand how to do it right. I can't find any samples of its usage. Also, MbUnit.Framework.TestCase is not an attribute.


Combine output from py.test and boost unit_test for Jenkins/xunit with CTest

I have a CMake environment with CTest which currently generates a boost/unit_test binary, and, as described here, it's called like this:



test_exe --log_format=XML --log_sink=results.xml


to generate output that can be handled by Jenkins' xUnit plugin.


I now want to add Python tests using py.test, and of course I want to have detailed test results on the Jenkins dashboard, too.


This post suggests it's possible to provide Jenkins with more than one result file, which would currently be my strategy.


But since I'm using CTest, isn't there a way to let CTest interpret nested test results? What if I have separate tests without boost/unit_test that I have to run via ctest -T Test to get output Jenkins can handle?


What's the recommended way to configure such a test environment?


TestNG not waiting till my Spring session initialised

I have something like the following code, and if I run it with a single test tag in the XML it works fine. But when I have multiple test suites, it simply skips all tests and runs the last one.


I know that initializing Spring takes a lot of time, but how can I wait until my session gets initialized in @BeforeSuite(alwaysRun=true)?


Note: Thread.sleep() is also not a solution for me :(



@BeforeSuite(alwaysRun = true)
public void contextSetup() throws Exception {
    start = new Date();
    spContext = createContext();
    importStandardConfiguration(spContext);
}

@BeforeTest(alwaysRun = true)
@Parameters({"testConfigFile"})
public void testContextSetUp(String fileName) throws Exception {
    loadTestContext(fileName);
    _skippedTests = new ArrayList<String>();
}

@AfterTest
public void releaseContext() throws GeneralException {
    testContext = null;
    universalMap.clear();
}

@AfterSuite
public void releaseSpContext() throws GeneralException {
    if (spContext != null) SPFactory.releaseContext(spContext);
    throw new GeneralException("Encountered leaked Context");
}

@Test(dataProvider = "DataFile")
public void testConnectorImplementation(Map parameter) throws Exception { }


And my TestNG file looks like this:



<!DOCTYPE suite SYSTEM "http://ift.tt/1ue4P4h" >
<suite name="My Tests Suite" verbose="0">

    <test name="First Test">
        <parameter name="testConfigFile"
                   value="config/MyFirstTestConfig.xml" />
        <classes>
            <class name="com.testframework.ConnectorTests">
            </class>
        </classes>
    </test>

    <test name="Second Test">
        <parameter name="testConfigFile"
                   value="config/MySecondTestConfig.xml" />
        <classes>
            <class name="com.testframework.ConnectorTests">
            </class>
        </classes>
    </test>
</suite>

Not able to mock urllib2.urlopen using Python's mock.patch

Below is a code snippet of my api.py module



# -*- coding: utf-8 -*-

from urllib2 import urlopen
from urllib2 import Request

class API:

    def call_api(self, url, post_data=None, header=None):
        is_post_request = True if (post_data and header) else False
        response = None
        try:
            if is_post_request:
                url = Request(url=url, data=post_data, headers=header)
            # Calling api
            api_response = urlopen(url)
            response = api_response.read()
        except Exception as err:
            response = err

        return response


I am trying to mock urllib2.urlopen in a unit test of the above module. I have written:



# -*- coding: utf-8 -*-
# test_api.py

from unittest import TestCase
import mock

from api import API

class TestAPI(TestCase):

    @mock.patch('urllib2.Request')
    @mock.patch('urllib2.urlopen')
    def test_call_api(self, urlopen, Request):
        urlopen.read.return_value = 'mocked'
        Request.get_host.return_value = 'google.com'
        Request.type.return_value = 'https'
        Request.data = {}
        _api = API()
        _api.call_api('https://google.com')


After I run the unit test, I get an exception:



<urlopen error unknown url type: <MagicMock name='Request().get_type()' id='159846220'>>


What am I missing? Please help me out.


Mockito Allow different argument types to mock overloaded method

For JUnit testing I want to mock an overloaded method. There should be no need to implement several methods in the mock builder, though. I want to do something like this:



Mockito.when(mock.getSomeInfo(
        Mockito.any(ArgumentType1.class) OR Mockito.any(ArgumentType2.class),
        Mockito.any(ArgumentType3.class)))
    .then(new Answer<AnswerType>() { .. })


I know it doesn't work with the OR statement, but is there another way to do this in Mockito?
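The fallback I can see is stubbing each overload separately while sharing a single Answer, which at least avoids duplicating the answer logic; a sketch using the placeholder types from above (mock is the mock from the snippet, and ArgumentType1/2/3 and AnswerType are stand-ins):

import static org.mockito.Matchers.any;
import static org.mockito.Mockito.when;

import org.mockito.invocation.InvocationOnMock;
import org.mockito.stubbing.Answer;

// One Answer instance, reused for both overloads.
Answer<AnswerType> sharedAnswer = new Answer<AnswerType>() {
    @Override
    public AnswerType answer(InvocationOnMock invocation) {
        return new AnswerType(); // whatever the test needs to return
    }
};

when(mock.getSomeInfo(any(ArgumentType1.class), any(ArgumentType3.class))).then(sharedAnswer);
when(mock.getSomeInfo(any(ArgumentType2.class), any(ArgumentType3.class))).then(sharedAnswer);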


Thanks in advance!


Scrutinizer and unit-testing with Symfony2

Actually, this question is not about programming; it's about code organization and quality.


I use Scrutinizer to check the quality of my code. In my project I get an error about duplication of the setUp() method in my unit test classes. Here are both methods:


First:



/**
 * Boot application for testing import command
 */
public function setUp()
{
    $kernel = $this->createKernel();
    $kernel->boot();
    $application = new Application($kernel);
    $application->add(new ImportCommand());
    $command = $application->find('uber:translations:import');
    $this->commandTester = new CommandTester($command);
}


Second:



/**
 * Boot application for testing purge memcache command
 */
public function setUp()
{
    $kernel = $this->createKernel();
    $kernel->boot();
    $application = new Application($kernel);
    $application->add(new PurgeCommand());
    $command = $application->find('uber:translations:purge');
    $this->commandTester = new CommandTester($command);
}


Yes, they look similar, but I use different commands. How can I DRY up my code? Can somebody give me some advice? Thanks!


Mockito mail API testing issue

I am using org.jvnet.mock_javamail.Mailbox for testing email, but the email goes to the actual address instead of the in-memory Mailbox.


Here is my code



@Before
public void init() {
    messageTemplateService = mock(MessageTemplateService.class);
    props = mock(Properties.class);
    emailProperties = mock(EmailProperties.class);
    //emailProperties.setP
    emailService.setEmailProperties(emailProperties);
    emailService.setMessageTemplateService(messageTemplateService);
    Mailbox.clearAll();
}

@Test
public void testSend() throws MessagingException, IOException {
    //Mailbox.clearAll();
    when(emailProperties.getAdminTo()).thenReturn("pritam.pritam176@gmail.com");
    when(emailProperties.getSenderEmail()).thenReturn("pritam.pkm1989@gmail.com");
    when(emailProperties.getSender()).thenReturn("pritam");
    //Mailbox mailbox = Mailbox.get(emailProperties.getAdminTo());
    //EmailServiceImpl emailService = new EmailServiceImpl();

    Properties props = new Properties();
    props.put("mail.smtp.host", "smtp.gmail.com");
    props.put("mail.smtp.user", "pritam.pritam176@gmail.com");
    props.put("mail.smtp.password", "9439586575");
    props.put("mail.smtp.port", 587);
    props.put("mail.smtp.auth", true);
    props.put("mail.debug", true);

    when(emailProperties.getProps()).thenReturn(props);
    when(emailProperties.getHost()).thenReturn("smtp.gmail.com");
    when(emailProperties.getUserName()).thenReturn("pritam.pritam176@gmail.com");
    when(emailProperties.getPassword()).thenReturn("9439586575");

    MessageTemplate temp = new MessageTemplate();
    temp.setBody("Your Friend ${friend} want to see this link");
    temp.setSubject("Link");
    when(messageTemplateService.getMessageTemplateById("2")).thenReturn(temp);
    MessageTemplate mTemplate = messageTemplateService.getMessageTemplateById("2");

    String to = emailProperties.getAdminTo();
    String from = emailProperties.getSenderEmail();
    String subject = mTemplate.getSubject();
    String contentType = "text/html";
    String body = mTemplate.getBody();

    emailService.sendEmail(to, subject, body, contentType);
    Mailbox mailbox = Mailbox.get(emailProperties.getAdminTo());
    assertEquals(1, mailbox.getNewMessageCount());
    assertFromEquals(mailbox.get(0), from);
    assertToEquals(mailbox.get(0), emailProperties.getAdminTo());
    assertSubjectEquals(mailbox.get(0), "Subject: testSend");
    assertBodyEquals(mailbox.get(0), "Body: testSend");
}



  • What did I do wrong? Please help me. The line Mailbox mailbox = Mailbox.get(emailProperties.getAdminTo()); should yield a mailbox whose new-message count is 1, but it returns 0.


Python properties and unittest TestCase

Today I wrote a test and made a typo in one of the test methods. My tests failed, but I don't understand why. Is this special behaviour of Python properties, or something else?



from unittest import TestCase


class FailObject(object):
    def __init__(self):
        super(FailObject, self).__init__()
        self.__action = None

    @property
    def action(self):
        return self.__action

    @action.setter
    def action(self, value):
        self.__action = value


def do_some_work(fcells, fvalues, action, value):
    currentFailObject = FailObject()
    rects = [currentFailObject]
    return rects


class TestModiAction(TestCase):
    def testSetFailObjectAction(self):
        rect = FailObject  # IMPORTANT PART
        rect.action = "SOME_ACTION"  # No fail!
        self.assertEquals("SOME_ACTION", rect.action)

    def testSimple(self):
        fcells = []
        fvalues = []
        rects = do_some_work(fcells, fvalues, 'act', 0.56)

        rect = rects[0]
        self.assertEquals('act', rect.action)


When I run this test case with nose:



.F
======================================================================
FAIL: testSimple (test.ufsim.office.core.ui.cubeeditor.TestProperty.TestModiAction)
----------------------------------------------------------------------
Traceback (most recent call last):
File "TestProperty.py", line 36, in testSimple
self.assertEquals('act', rect.action)
AssertionError: 'act' != 'SOME_ACTION'

----------------------------------------------------------------------
Ran 2 tests in 0.022s

FAILED (failures=1)


If I fix the typo by creating an instance in testSetFailObjectAction, all tests work as expected. But this example brings me back to the question: is it safe to use properties? What if I make a typo again some day?


Software Testing Query

I have a query related to smoke testing. Do we perform smoke testing only at the integration level, or is it applicable at all levels of testing (unit, integration, system, UAT)? Please help me resolve this query.


javascript expect.toBe with multiple values

I have a problem related to daylight saving time. I have a JavaScript Jasmine test where I test that an opening time is correct. The opening times are stored in GMT, because they are retrieved from a backend API. The problem is that the correct opening times cannot be tested with expect.toBe(certain_hour), because now that daylight saving has gone off, the opening hours won't be the same. Maybe it is unwise to store opening hours in GMT anyway, since then the actual opening hour changes. But how could I test expect.toBe with multiple values? For now, I would want something like expect.toBe(hour_one || hour_two), but that is not supported.


Monday, March 30, 2015

Unit Testing Generic Unit of Work and Repository Pattern framework using Moq

I'm at my wits' end. I'm learning how to use the Generic Unit of Work and Repository pattern framework (http://ift.tt/JcIZhn). I've got no problem setting up the controllers, Unity, and views... they all work on live data. My issue is unit testing these async repositories.


I've come across numerous posts here on Stack Overflow and articles on MSDN regarding mocking the DataContext using Moq (http://ift.tt/1ILhuoW).


However, upon executing the tests, I seem to be facing a roadblock and I have no idea how to fix this. Please bear with me.


Here is the controller I'm testing:



public class TeamsController : Controller
{
    private readonly IUnitOfWorkAsync _uow;
    private readonly IRepositoryAsync<Team> _repo;

    public TeamsController(IUnitOfWorkAsync uow)
    {
        _uow = uow;
        _repo = _uow.RepositoryAsync<Team>();
    }

    // GET: Teams
    public async Task<ViewResult> Index()
    {
        return View(await _repo.Queryable().ToListAsync());
    }
}


Here is the unit test:



[TestMethod]
public async Task Index_AccessIndexPage_MustPass()
{
    // arrange
    var data = new List<Team>
    {
        new Team { Id = 1 }
    }.AsQueryable();

    Mock<DbSet<Team>> mockSet = data.GenerateMockDBSet<Team>();
    var mockContext = new Mock<IDataContextAsync>();
    mockContext.As<IDBContext>().Setup(c => c.Teams).Returns(mockSet.Object);

    _uow = new UnitOfWork(mockContext.Object);

    // act
    _controller = new TeamsController(_uow);
    var result = await _controller.Index();
    var model = (List<Team>)((ViewResult)result).Model;

    // assert
    Assert.IsNotNull(model);
    Assert.AreEqual(model.Count, 2);
}


Here is the utility I got from MSDN:



public static Mock<DbSet<TEnt>> GenerateMockDBSet<TEnt>(this IQueryable<TEnt> data)
    where TEnt : Entity
{
    var mockSet = new Mock<DbSet<TEnt>>();
    mockSet.As<IDbAsyncEnumerable<TEnt>>()
        .Setup(m => m.GetAsyncEnumerator())
        .Returns(new TestDbAsyncEnumerator<TEnt>(data.GetEnumerator()));

    mockSet.As<IQueryable<TEnt>>()
        .Setup(m => m.Provider)
        .Returns(new TestDbAsyncQueryProvider<TEnt>(data.Provider));

    mockSet.As<IQueryable<TEnt>>().Setup(m => m.Expression).Returns(data.Expression);
    mockSet.As<IQueryable<TEnt>>().Setup(m => m.ElementType).Returns(data.ElementType);
    mockSet.As<IQueryable<TEnt>>().Setup(m => m.Provider).Returns(data.Provider);
    mockSet.As<IQueryable<TEnt>>().Setup(m => m.GetEnumerator()).Returns(data.GetEnumerator);

    return mockSet;
}


Here is the actual exception from the unit test:



Test method MyMVC.Tests.Controllers.TeamsControllerTest.Index_AccessIndexPage_MustPass threw exception:
System.ArgumentNullException: Value cannot be null.
Parameter name: source
Result StackTrace:
at System.Data.Entity.Utilities.Check.NotNull[T](T value, String parameterName)
at System.Data.Entity.QueryableExtensions.ToListAsync[TSource](IQueryable`1 source)


The exception is fired during the actual .Queryable() call, because the IRepositoryAsync _repo seems to be handing ToListAsync a null source.


Can anyone help?


Thank you.


Disable Symfony2 SwiftMailer spooling for unit tests

I need to completely disable the SwiftMailer spooling functionality for some of my unit tests.


I have methods that implement application-specific functionality on top of SwiftMailer and I have unit tests for them.


Unfortunately, it appears that SwiftMailer's listener that sends mail spooled to memory at the end of a request is not run during unit testing.


This means that messages spooled to memory are lost. If I spool to a file, then I have to manually run the console swiftmailer:spool:send command. Yes, I know that I could run that command from within my test, but that really doesn't seem very clean and is subject to failure if the syntax of the send command is ever changed.


I have tried removing the swiftmailer.spool configuration from config.yml and specifying it only in config_dev.yml and config_prod.yml, leaving it out of config_test.yml. This has had no effect.


In fact, I have been utterly unable to get rid of the default spool configuration.


I have been using console config:debug swiftmailer --env=[whatever] to test after each change, and the spool configuration is always there with type: memory unless I explicitly set the type to file.


Suggestions? Many thanks.


Testing functions that throw exceptions

I'm using tape, and I'm trying to test a couple of functions. My functions throw errors and validate objects. I like throwing errors because then later my promises can catch them. I'm trying to run simple tests, establishing the data argument in all the scenarios to hit each error in the stack. How can I test this function without putting it in a try/catch every time? I see there are two functions in the API, t.throws() and t.doesNotThrow(); I've tried them both and even added the extra params like t.throws(myFunc({}), Error, "no data"), but nothing seems to work as expected.



var test = require('tape')
var _ = require('underscore')

function myFunction(data){
    if(!data) throw new Error("no data")
    if(_.size(data) == 0) throw new Error("data is empty")
    if(!data.date) throw new Error("no data date")
    if(!data.messages.length == 0) throw new Error("no messages")
    data.cake = "is a lie"
    return data
}

test("my function", function(t){
    t.throws(myFunction({}))
    t.end()
})

Finding .NET code covered exclusively by a set of tests

For a suite of .NET tests (MSTest), is there a way to find code blocks that are covered exclusively by a particular subset of the tests (i.e. a single TestClass?)


Here's my scenario: I'm working in an old code base which has built up a large and slow suite of unit and integration tests over time. We'd like to reduce the overall runtime of the suite. One approach is reducing redundancy between integration tests and unit tests. In fact, there are likely integration tests that are completely redundant to some unit tests.


Here's what I'd like to do:



  1. Collect code coverage over the full suite of unit and integration tests.

  2. Find integration tests which don't cover any blocks not covered by other tests.

  3. Manually validate that the reported tests are completely redundant, and if so, remove them.


Our tests are written in MSTest and run using Visual Studio. I'm familiar with collecting code coverage, but I'm not sure how to query through it.


Get data structure of all tests found by Nose

How would I get some sort of data structure containing a list of all tests found by Nose? I came across this:


List all Tests Found by Nosetest


I'm looking for a way to get a list of unit test names in a python script of my own (along with the location or dotted location).


Define environment variables in phpunit without using xml file

My folder structure:



tests/
    Controller/
    Library/
    Model/
    Seeds/
    bootstrap.php
    phpunit.xml


I have an env.php file that defines some variables like so:



define('TESTING',true);


but in my test files:



$this->assertTrue(defined(TESTING));


ends up being false.


My phpunit.xml file:



<?xml version="1.0" ?>
<document>
<phpunit
backupGlobals="false"
backupStaticAttributes="false"
processIsolation="true"
bootstrap="./bootstrap.php"
>
<testsuites>
<testsuite name="Main">
<directory>./</directory>
</testsuite>
<testsuite name="Library">
<directory>./Library/</directory>
</testsuite>
<testsuite name="Model">
<directory>./Model/</directory>
</testsuite>
<testsuite name="Controller">
<directory>./Controller/</directory>
</testsuite>
</testsuites>
</phpunit>
</document>


My bootstrap.php file requires my env.php file. So why are the environment variables not accessible in my tests? I don't want to define all of my environment variables in my phpunit.xml file, since they are already defined in my env.php (DRY, right? ;) I have also tried using getenv(TESTING); with no success.


Thanks in advance!


How to test 2 methods with common logic?

Let's say that I have 2 public methods:



func didSelect(data: Data) {
    // do something

    self.view.showText(textForData(data))
}

func didDismiss(data: Data) {
    if data.isSomething {
        self.view.showText(textForData(data))
    }

    ...
}

private func textForData(data: Data) -> String {
    var text: String

    if data.distance == nil {
        text = "..."
    } else if data.distance < 1000 {
        text = "\(data.distance) m"
    } else {
        text = "\(data.distance / 1000) km"
    }

    return text
}


Both of them depend on the formatting logic of textForData.


textForData has (with this minimized implementation) 3 possible cases. If I do test every possible case for both of my public functions, I'll end up with 6 test methods, and 3 of them would also be testing the same logic that was already tested by the other 3.


What's the proper way of testing this?


P.S.: I could write a separate test for textForData, and in the tests for the public methods assert that textForData is called, but that seems to break the encapsulation of my class, and I don't want to make textForData public. I also wouldn't like to make a separate class just for my textForData logic, because I would end up creating too many dependencies for this current class, and that logic doesn't seem to fit anywhere else besides this class.


Angular testing, mocking rootscope

I am trying to mock the rootScope.$id property, and I am doing something incorrectly that I can't quite put my finger on. I am using Karma/Jasmine and Sinon for the stubbing. Here's what I have.


Up top, defining the mock:



var mockRootScope = {
    $id: sinon.stub()
};


Then in the beforeEach:



angularMocks.module(function($provide) {
    $provide.value('$rootScope', mockRootScope);
});


And trying to set it in the unit test like so:



mockRootScope.returns({
    $id: '1'
});


It doesn't seem to be mocked correctly. Any idea what I'm doing wrong here? Thanks!


Proper Unit Testing Philosophy

What would be the proper thing to do for each case?


1: Context: Testing a function that creates a database and generates metadata for that database.


Question: Normally unit test cases are supposed to be independent, but if we want to make sure the function raises an exception when trying to make a duplicate database, would it be acceptable to have ordered test cases where the first one tests if the function works, and the second one tests if it fails when calling it again?
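For example, the alternative I can think of to ordered test cases is folding both steps into one self-contained test, so no test depends on another having run first; a rough sketch (all names hypothetical):

import static org.junit.Assert.fail;

import org.junit.Test;

public class CreateDatabaseTest {

    // DatabaseManager and DuplicateDatabaseException are hypothetical
    // placeholders for the real API under test.
    @Test
    public void creatingSameDatabaseTwiceRaises() {
        DatabaseManager manager = new DatabaseManager();
        manager.createDatabase("test_db"); // first creation must succeed

        try {
            manager.createDatabase("test_db"); // duplicate creation must fail
            fail("expected a duplicate-database exception");
        } catch (DuplicateDatabaseException expected) {
            // the failure we wanted; the test passes
        }
    }
}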


2: Most of the other functions require a database and metadata. Would it be better to call the previous functions in the setup of each test suite to create the database and metadata, or would it be better to hard-code the required information in the database?


How can I structure a mocha unit test that involves promises and third party NPM modules?

The code I am trying to test is:



exports.hasTokenOrApi = (req, res, next) ->
  if not req.headers?.authorization
    return res.status(403).end()

  new Promise (resolve, reject) ->
    if req.headers.authorization.length is 32
      # We are using an API key
      global.db.User.find
        where:
          api_key: req.headers.authorization
      .then (dbUser) ->
        resolve dbUser.apiDisplay()
    else
      # We are using a redis session
      req.redisSession.getAsync
        app: 'sessions'
        token: req.headers.authorization
      .then (response) ->
        resolve response.d
  .then (user) ->
    if not user.id
      return res.status(403).end()
    req.user = user

    next()
  .catch (err) ->
    next err


This is a middleware (I'm using Express) to catch tokens or API keys for various API endpoints.


So far the tests I have are:



describe 'Authentication Middleware', ->
  mock_res = {}
  before (done) ->
    mock_res =
      status: ->
        @
      end: ->
        @

    global.db =
      User:
        find: ->
          @
        then: ->
          id: 1

    done()

  it 'should return a 403 if no authorization is set in the header', ->
    mock_req = {}
    mock_next = null

    status_spy = sinon.spy mock_res, 'status'
    end_spy = sinon.spy mock_res, 'end'

    authentication.hasTokenOrApi mock_req, mock_res, mock_next
    status_spy.calledWith(403).should.equal true
    end_spy.called.should.equal true

  it.only 'should detect a valid API key', ->
    mock_req =
      headers:
        authorization: 'e16b2ab8d12314bf4efbd6203906ea6c'
    mock_next = sinon.spy()

    authentication.hasTokenOrApi mock_req, mock_res, mock_next
    mock_next.called.should.equal true


The first test is fine: it works great and solves all of my problems. The second one isn't working properly. I assume it has something to do with the promises? My test is returning false when what I'm expecting is true.


Any help would be GREATLY appreciated!


Microsoft Unit Tests - Data source cannot be found in the test configuration settings

I am trying to build a test class of data-driven unit tests in C#. I want to use 3 databases: one from SQL Server, one from Access, and one from Excel. This is my app.config file:



<?xml version="1.0" encoding="utf-8" ?>
<configuration>
<configSections>
<section name="microsoft.visualstudio.testtools"
type="Microsoft.VisualStudio.TestTools.UnitTesting.TestConfigurationSection,
Microsoft.VisualStudio.QualityTools.UnitTestFramework,
Version=10.0.0.0,
Culture=neutral"/>
</configSections>
<connectionStrings>
<add name="MyJetConn"
connectionString="Provider=Microsoft.ACE.OLEDB.12.0;
Data Source=H:\SQA\CoolMath\CoolMath\Database1.accdb;
Persist Security Info=False;"
providerName="System.Data.OleDb" />
<add name="MyExcelConn"
connectionString="Dsn=Excel Files;
dbq=H:\SQA\CoolMath\CoolMath\CoolMathExcelDataTable.xlsx;
defaultdir=.;
driverid=1046;
maxbuffersize=2048;
pagetimeout=5"
providerName="System.Data.Odbc" />
<add name="MSSQLConn"
connectionString="Data Source=H:\SQA\CoolMath\CoolMath\SQLExpress;
Initial Catalog=MSSQLDB;
Integrated Security=SSPI;"
providerName="System.Data.SqlClient" />
</connectionStrings>
<microsoft.visualstudio.testtools>
<dataSources>
<add name="MyJetDataSource"
connectionString="MyJetConn"
dataTableName="CoolMathAcessDataTable"
dataAccessMethod="Sequential"/>
<add name="MyExcelDataSource"
connectionString="MyExcelConn"
dataTableName="Sheet1$"
dataAccessMethod="Sequential"/>
<add name="MSSQLDataSource"
connectionString="MSSQLConn"
dataTableName="dbo.CoolMathDataTable"
dataAccessMethod="Sequential"/>
</dataSources>
</microsoft.visualstudio.testtools>
</configuration>


When I try to run the tests, they all fail with the message: "Data source cannot be found in the test configuration settings". I can't see what I am doing wrong; perhaps it is the location of the databases? (They are all in the same directory as the code project and the XML file.) Thanks to all helpers!


Some of your tests did a full page reload - error when running Jasmine tests

I'm running into an issue where, when I run my tests with Jasmine, I get the error below. It seems to happen when I try to execute a certain number of tests, and it doesn't seem to be tied to a particular test: if I comment out some, the tests pass; if I uncomment some, the error appears; if I comment out ones that were uncommented before, they all pass again. (I.e. with red, green, blue, and orange tests failing together, commenting out orange and blue makes the run pass; uncommenting blue and orange makes it fail again; but commenting out red and green makes it pass again.)


Chrome 41.0.2272 (Mac OS X 10.10.1) ERROR Some of your tests did a full page reload! Chrome 41.0.2272 (Mac OS X 10.10.1): Executed 16 of 29 (1 FAILED) ERROR (0.108 secs / 0.092 secs)


I'm stumped as to what is going on; the more tests I add, the more of an issue this becomes. Has anyone encountered this before? I have no idea what could be causing it, as nothing in any of my tests does any kind of redirection, and they all pass on another person's machine.
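
One hedged way to narrow this down, since the message only says that *some* test navigated: install a handler before the suite runs so the offending spec shows up in the log (this assumes the tests run in a real browser context via Karma):


// debug helper, loaded ahead of the specs (e.g. first in karma.conf files)
window.onbeforeunload = function () {
  // Karma attributes console output to the currently running spec,
  // so this points at the test that triggered the navigation
  console.error('Page reload attempted during the currently running spec');
};
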


How to inject into Gradle Unit Test Scope (Android, Dagger)

I am using the new unit testing feature in the Gradle 1.1 Android plugin. Let's say I have a JUnit Test like this:



public class GlossaryItemJsonTest {
@Inject
Gson gson; //this is not getting injected, so it's null


@Test
public void testDeserialization() throws Exception {
//...
}
}


How can I inject from my main scope into my test scope? I do not want to duplicate any code from my main DataModule. Traditionally I would add an injects = GlossaryItemJsonTest.class entry to the @Module annotation, but the main DataModule cannot see any classes from the test scope.



@Module(
        injects = GlossaryItemJsonTest.class, // does not compile: the test class is not visible here
        complete = false,
        library = true
)
public class DataModule {
    @Provides @Singleton Gson provideGson() { /* ... */ }
}


How do I inject my gson variable in the Gradle test scope?
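
One approach commonly used with Dagger 1, offered as a sketch (TestDataModule is a name invented here): declare a second module in the test source set, where both DataModule and the test class are visible, and build an ObjectGraph in setup:


import javax.inject.Inject;
import org.junit.Before;
import org.junit.Test;
import com.google.gson.Gson;
import dagger.Module;
import dagger.ObjectGraph;

// Lives under src/test/java, so it can see the test class.
@Module(
        includes = DataModule.class,
        injects = GlossaryItemJsonTest.class
)
class TestDataModule {
}

public class GlossaryItemJsonTest {

    @Inject
    Gson gson;

    @Before
    public void setUp() {
        // builds the graph from the main module plus the test bindings
        ObjectGraph.create(new TestDataModule()).inject(this);
    }

    @Test
    public void testDeserialization() throws Exception {
        // gson is now non-null here
    }
}
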


Running Android Studio Unit Tests from Command Line/Terminal

[This may be an obvious one]


I am moving all my Eclipse-based Android projects to Android Studio with Gradle. I have moved my unit tests to Android Studio.


How do I run the unit tests from terminal/command line? I need to automate my testing in Jenkins. Any pointers/suggestions would help.
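
For reference, with the 1.1 plugin the unit tests are plain Gradle tasks, so something like the following is usually all Jenkins needs (exact per-variant task names and report paths vary by plugin version, so gradlew tasks is the authoritative list):


# list everything, including the per-variant test task names
./gradlew tasks

# run the JVM unit tests introduced by the 1.1 plugin
./gradlew test

# run instrumented tests on a connected device/emulator, if needed
./gradlew connectedAndroidTest
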


Trouble with Mock.Assert() for sequential calls with different argument values to mock

Could someone please take a look at the demo code below and let me know if what I'm seeing is due to error on my part or a Telerik issue?


I'm using Telerik.JustMock v. 2014.1.1519.1 and Microsoft.VisualStudio.QualityTools.UnitTestFramework v. 10.0.0.0.


As the code comments note, I get the expected results when the id variables are equal (one call for each of the ids), but not when they're different. When I step through the first test I can see the expected calls being made, but JustMock then tells me they weren't made.


I'll appreciate any constructive thoughts. Hopefully this isn't a case of me not getting enough sleep...



[TestClass]
public class RunnerTests
{
[TestMethod]
public void MakeTwoCallsDifferentIdsFails()
{
int idOne=1;
int idTwo=2;

DataTable dt=new DataTable();
dt.Columns.Add("Id");
dt.Rows.Add(idOne);
dt.Rows.Add(idTwo);

IProcessor mock = Mock.Create<IProcessor>();
Runner runner = new Runner(mock);
runner.Process(dt);

Mock.Assert(()=>mock.Process(Arg.IsAny<MyArgs>()), Occurs.Exactly(2));
//The following two asserts fail (with 0 calls made to mock), regardless of sequence:
Mock.Assert(()=>mock.Process(Arg.Matches<MyArgs>
(d=>d.Id==idOne)),Occurs.Once());
Mock.Assert(()=>mock.Process(Arg.Matches<MyArgs>
(d=>d.Id==idTwo)),Occurs.Once());
}

[TestMethod]
public void MakeTwoCallsSameIdPasses()
{
//ids intentionally equal:
int idOne=1;
int idTwo=1;

DataTable dt=new DataTable();
dt.Columns.Add("Id");
dt.Rows.Add(idOne);
dt.Rows.Add(idTwo);

IProcessor mock = Mock.Create<IProcessor>();
Runner runner = new Runner(mock);
runner.Process(dt);

//all asserts pass:
Mock.Assert(()=>mock.Process(Arg.IsAny<MyArgs>()), Occurs.Exactly(2));
//The following two pass:
Mock.Assert(()=>mock.Process(Arg.Matches<MyArgs>
(d=>d.Id==idOne)),Occurs.Exactly(2));
Mock.Assert(()=>mock.Process(Arg.Matches<MyArgs>
(d=>d.Id==idTwo)),Occurs.Exactly(2));
}
}

public interface IProcessor
{
void Process(MyArgs args);
}

public class MyArgs
{
public void UpdateId(int newId)
{
this.Id = newId;
}

public int Id {get; private set;}
}

public class Runner
{
private IProcessor processor;

public Runner(IProcessor processor)
{
this.processor=processor;
}

public void Process(DataTable dt)
{
MyArgs args = new MyArgs();

foreach(DataRow row in dt.Rows)
{
int id = Convert.ToInt32(row["Id"]);
args.UpdateId(id);
processor.Process(args);
}
}
}


EDIT: In the test method that fails, if I completely remove one of the int variables and explicitly assert that the other was called exactly once, the test passes. Things seem to go south only when I throw that second, different value into the mix.
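
One hedged reading of this behaviour (not confirmed against Telerik's documentation): the mock records a reference to the MyArgs instance, not a snapshot of its state, and Runner.Process mutates one shared instance. By the time Mock.Assert runs, every recorded call sees the final Id, which matches both tests above, and also the EDIT: with only one value there is no later mutation to overwrite what the matcher inspects. Giving each call its own instance would let the argument matchers see the per-call values:


public void Process(DataTable dt)
{
    foreach (DataRow row in dt.Rows)
    {
        // a fresh MyArgs per row, so each recorded call keeps its own Id
        MyArgs args = new MyArgs();
        args.UpdateId(Convert.ToInt32(row["Id"]));
        processor.Process(args);
    }
}
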


How to set up code coverage and unit tests for express functions?

I have a route defined like


app.post '/v1/media', authentication.hasValidApiKey, multipart, mediaController.create, mediaController.get


I want to write tests for the individual components of the route. So starting with authentication.hasValidApiKey, that's a function defined in another file:



exports.hasTokenOrApi = (req, res, next) ->
if not req.headers.authorization
return res.status(403).end()

doOtherStuff...


In my test, I have:



authentication = require '../src/middlewares/authentication'

describe 'Authentication Middleware', ->
before (done) ->
done()

it 'should check for authentication', (done) ->
mock_req = null
mock_res = null
mock_next = null

authentication.hasTokenOrApi mock_req, mock_res, mock_next
done()


How do I deal with the req, res, and next arguments? And how can I set up code coverage? I am running my tests with: export NODE_ENV=test && ./node_modules/.bin/mocha --compilers coffee:'./node_modules/coffee-script/lib/coffee-script/register'
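
A hedged sketch of faking the trio by hand, assuming only status and end are touched in this branch:


it 'should return 403 when no authorization header is set', ->
  mock_req = headers: {}
  sent_status = null
  next_called = false
  mock_res =
    status: (code) ->
      sent_status = code
      @
    end: -> @
  mock_next = -> next_called = true

  authentication.hasTokenOrApi mock_req, mock_res, mock_next

  sent_status.should.equal 403
  next_called.should.equal false


For coverage, istanbul is one commonly used option, e.g. istanbul cover _mocha -- --compilers coffee:./node_modules/coffee-script/lib/coffee-script/register, though the reported line numbers will refer to the compiled JavaScript unless a CoffeeScript-aware coverage layer is added.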


Mocking classes that created with app()->make

I have this code in my __construct:



public function __construct(Guard $auth)
{
$this->auth = $auth;
$this->dbUserService = app()->make('DBUserService');
}


Now, when I'm unit testing I know that I can mock Guard and pass its mock to $auth, but how can I mock dbUserService? It's instantiated through the IoC container.
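
One hedged option, assuming a Laravel test case with the container available: swap the binding before constructing the class under test, since app()->make resolves whatever instance is registered (the service method and MyController below are invented for the example):


public function testSomethingUsingTheService()
{
    $serviceMock = Mockery::mock('DBUserService');
    $serviceMock->shouldReceive('someMethod')->andReturn('stubbed');

    // app()->make('DBUserService') now returns the mock
    $this->app->instance('DBUserService', $serviceMock);

    $guardMock = Mockery::mock('Illuminate\Contracts\Auth\Guard');
    $controller = new MyController($guardMock);

    // ... exercise the controller and assert against the mocks ...
}
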


Java: writing ignored test cases

I am studying how to create Java test cases. On the internet I saw two structures:



public class XXX {
    @Test
    public void testOne() { /* ... */ }

    @Test
    public void testTwo() { /* ... */ }
}


And



public class XXX extends TestCase {
//test cases
}


I am trying to use the second one, but I cannot create an ignored test case. In the first example I can use @Ignore. What about the second one?
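
A sketch of the usual JUnit 3 conventions, since @Ignore only exists in the annotation-based (JUnit 4) style: the TestCase runner discovers tests purely by the "test" name prefix, so renaming a method takes it out of the run; the cleaner long-term fix is to move to the first style and use @Ignore.


import junit.framework.TestCase;

public class XxxTest extends TestCase {

    public void testSomething() {
        // runs: discovered via the "test" name prefix
    }

    // not discovered: no "test" prefix, so effectively ignored
    public void disabledTestSomethingElse() {
    }
}
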


Unit Testing PHP, SQLite with Phactory

I am writing a primer on PHP Unit testing from scratch and I am looking for some simple examples of unit testing SQLite databases using http://phactory.org/. Can you point me to some references or share your ideas?


Thank you!
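
From memory of Phactory's front-page example (treat these calls as approximate rather than authoritative), the SQLite setup is roughly: hand it a PDO connection, define a blueprint per table, then create rows in each test:


// in setUp(): point Phactory at an in-memory SQLite database
$pdo = new PDO('sqlite::memory:');
Phactory::setConnection($pdo);

// blueprint for the "users" table (column defaults)
Phactory::define('user', array('name' => 'Test User', 'email' => 'test@example.com'));

// in a test: insert a row, overriding a default
$user = Phactory::create('user', array('name' => 'Alice'));
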


Stubbing a stateful object in PHPSpec (or any unit testing framework)

How would you go about stubbing a DTO that also contains some logic (which kind of makes it more than a DTO anyway)? Would you even stub it? Consider this simple example:



class Context
{
/**
* @var string
*/
private $value;

function __construct($value)
{
$this->value = $value;
}

public function getValue()
{
return $this->value;
}

public function setValue($value)
{
$this->value = $value;
}


/*
* Some logic that we assume belong here
*/

}


class Interpreter
{
    public function interpret(Context $context)
    {
        $current_context = $context->getValue();

        if (preg_match('/foo/', $current_context))
        {
            $context->setValue(str_replace('foo', 'bar', $current_context));

            $this->interpret($context);
        }

        return $context->getValue();
    }
}


Now, unit testing Interpreter in a PHPSpec fashion:



class InterpreterSpec extends ObjectBehavior
{
    function it_does_something_cool_to_a_context_stub(Context $context)
    {
        $context->getValue()->willReturn('foo foo');

        $this->interpret($context)->shouldReturn("bar bar");
    }
}


Obviously this would create an endless loop, since the stub returns 'foo foo' forever. How would you go about unit testing the Interpreter? I mean, if you just passed a "real" instance of Context into it, you'd rely on that object's behaviour, and it wouldn't really be a unit test.
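
One common position, offered as a sketch: a Context like this is a value-holder with no collaborators of its own, so using a real instance does not really break unit isolation; the spec can still only fail when Interpreter's logic is wrong, and the recursion terminates naturally because the real setValue actually changes state.


class InterpreterSpec extends ObjectBehavior
{
    function it_replaces_foo_with_bar_in_the_context()
    {
        // real DTO instead of a stub: its getter/setter are trivial,
        // and the mutation lets the recursion reach its base case
        $context = new Context('foo foo');

        $this->interpret($context)->shouldReturn('bar bar');
    }
}
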


NSubstitute: checking received calls doesn't work

Hey guys, I'm new to the NSubstitute framework. I'm trying to test some of my classes, but when I use NSubstitute to check received calls it says it received no matching calls.


I'm trying to test whether the Tick() method calls update() on the track class.


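Since the code itself was only posted as a screenshot, here is a hedged sketch of the usual shape of such a test, with every name (ITrack, Clock, Tick, Update) invented for illustration. The most common causes of "received no matching calls" are asserting on a different substitute instance than the one handed to the subject, or argument matchers that don't match what was actually passed:


// Arrange: the substitute asserted on must be the same instance
// that the class under test was constructed with
var track = Substitute.For<ITrack>();
var clock = new Clock(track);

// Act
clock.Tick();

// Assert: any argument values here must match the actual call exactly
track.Received().Update();
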


Does ShouldHaveValidationErrorFor exercise my code adequately?

If I have a Fluent Validator like this:



public class ContactDetailsViewModelValidator : AbstractValidator<ContactDetailsViewModel>
{
public ContactDetailsViewModelValidator()
{
RuleFor(x => x.PhoneNumberLandline)
.NotEmpty().When(x => string.IsNullOrEmpty(x.PhoneNumberMobile))
.WithValidationResource("Error_Resource_Empty_Landline")
.Matches(RegularExpressionConstants.LandlinePhoneNumber)
.WithValidationResource("Error_Resource_Format_Landline");
}
}


WithValidationResource is an extension method and is unfortunately implemented in such a way that if I were to do something like:



var validator = new ContactDetailsViewModelValidator();
validator.Validate(viewModel);


It would fail, due to the dependencies WithValidationResource takes on the class that reads the resource file. I am on a large shared platform, and refactoring to mock those dependencies would be difficult at present. I have noticed colleagues are currently unit testing similar validators using:



this.validator.ShouldHaveValidationErrorFor(x => x.PhoneNumberLandline, this.viewModel);


However, in the case of my validator, I feel that this does not exercise the code adequately: it does test that the property has an error, but not that it is the correct one.


Is this all that can be done without fixing WithValidationResource so that I can call .Validate(), or is there a way to find out whether the correct error is present?


Thanks
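
If the resource lookup can be satisfied in the test process at all, one hedged way to check for the specific error is to inspect the failures on the validation result directly (expectedMessage below stands in for whatever Error_Resource_Empty_Landline resolves to; requires System.Linq):


var result = this.validator.Validate(this.viewModel);

// true only when the failure is on the right property AND carries
// the message the resource should have produced
bool hasExpectedError = result.Errors.Any(f =>
    f.PropertyName == "PhoneNumberLandline" &&
    f.ErrorMessage == expectedMessage);

Assert.IsTrue(hasExpectedError);
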


Unit testing assert_equal error

I just started to learn Ruby and I'd like to ask a question about the unit testing assert_equal function. I wrote my first class and tried to test it with the file sent to me by my teacher. When I test the code with my own program everything works fine, but when testing with the one from my teacher I constantly get a failure on the line:



assert_equal('Jan Kowalski', p1.to_s)


Here is my code:



class Person

@@count=0

def initialize(name, surname)
@@count+=1
@name=name
@surname=surname
@nr=@@count
end

attr_reader :nr
attr_accessor :name, :surname, :count

def to_s
puts "#{@name} #{@surname}"
end

def create_string
print "Hello, my name is "
to_s
end
private :create_string
def say_hello
create_string
end

def Person.count_people()
if (@@count == 1)
puts "You have created #{@@count} person"
else
puts "You have created #{@@count} people"
end
end
end


Here's my teacher testing program:



require_relative 'person'
require 'test/unit'
require 'stringio'

module Kernel
def capture_stdout
out = StringIO.new
$stdout = out
yield
return out
ensure
$stdout = STDOUT
end
end

class TestPerson < Test::Unit::TestCase

def test_person
out = capture_stdout do
Person.count_people
end
assert_equal("You have created 0 people\n", out.string)

p1 = Person.new('Jan', 'Kowalski')
assert_equal('Jan', p1.name)
assert_equal('Kowalski', p1.surname)

assert_respond_to(p1, :to_s)
assert_equal('Jan Kowalski', p1.to_s)

assert_equal(1, p1.nr)
assert_respond_to(p1, :say_hello)
assert_raise(NoMethodError) { p1.nr = 2 }

out = capture_stdout do
p1.say_hello
end
assert_equal("Hello, my name is Jan Kowalski\n", out.string)

assert_raise(NoMethodError) { p1.create_string }

out = capture_stdout do
Person.count_people
end
assert_equal("You have created 1 person\n", out.string)

p1.name = 'Janina'
p1.surname = 'Kowalska'
assert_equal('Janina', p1.name)
assert_equal('Kowalska', p1.surname)

p2 = Person.new('Zbyszek', 'Wielki')
assert_equal(2, p2.nr)

out = capture_stdout do
Person.count_people
end
assert_equal("You have created 2 people\n", out.string)
end

end


And here's the error that i get:



Loaded suite test_person
Started
Jan Kowalski
F
===============================================================================
Failure:
test_person(TestPerson)
test_person.rb:29:in `test_person'
26: assert_equal('Kowalski', p1.surname)
27:
28: assert_respond_to(p1, :to_s)
=> 29: assert_equal('Jan Kowalski', p1.to_s)
30:
31: assert_equal(1, p1.nr)
32: assert_respond_to(p1, :say_hello)
<"Jan Kowalski"> expected but was
<nil>

diff:
? "Jan Kowalski"
? i
===============================================================================


Finished in 0.018998 seconds.

1 tests, 5 assertions, 1 failures, 0 errors, 0 pendings, 0 omissions, 0 notifica
tions
0% passed

52.64 tests/s, 263.19 assertions/s


Why do I get nil instead of "Jan Kowalski"?
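
A hedged reading of the failure: to_s as written calls puts, which prints the string (hence the stray "Jan Kowalski" in the test output above) and returns nil, so assert_equal compares "Jan Kowalski" against nil. Returning the string from to_s, and letting create_string do the printing, would satisfy both of the teacher's assertions:


def to_s
  "#{@name} #{@surname}"   # return the string; don't print it
end

def create_string
  # to_s is interpolated via #{self}; puts supplies the trailing newline
  # that the "Hello, my name is Jan Kowalski\n" assertion expects
  puts "Hello, my name is #{self}"
end
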


How to use both NUnit extension and the NUnit VisualStudio Test Adapter

I have a problem using both the NUnit Visual Studio Test Adapter and a framework extension class I've created. In particular, my solution has 2 projects: the first one is a class library that contains some methods I must test, and the second one is the test assembly. This assembly includes the extension class (I don't add the related DLL to the addins folder inside the NUnit program directory, because that extension class has been created exclusively for this assembly) that I have created to get and use the assertion failure messages.


Question: I would like to have a BIN folder (next to my solution folder) where I place all the DLLs I need, and use this BIN folder to run my tests both from Visual Studio (which is why I need the NUnit Visual Studio Test Adapter) and through the NUnit-x86.exe program (I mean through the NUnit GUI). At the moment I can run all my tests correctly only from Visual Studio... through the NUnit GUI the extension class does not work (I mean that my extension is invisible, not that something raises an error).


How do I have to set up my project?


Extra info: I'm using NUnit 2.6.4 and VS 2013 Professional on a 64-bit machine. To use the test adapter I followed this guide: http://ift.tt/1oooCOl


Angular Karma Unit Test: How to inject a mock service in karma configuration instead of in all test specs

Recently, I have added a security feature to an existing angular app. Here is what I got afterwards:



Chrome 3X.0.2125 (Linux) ERROR
Some of your tests did a full page reload!
Chrome 3X.0.2125 (Linux): Executed 23 of 102 (skipped 2) ERROR


This is how I have set up the security feature:



angular.module('myapp', [/* ..I have omitted.. */])
    .run(function(MyLoginService /* ..other dependencies omitted.. */){
        if(!MyLoginService.isLoggedIn()){
            MyLoginService.redirectForLogin();
        }else{
            /* other logic */
        }
    });


I know I would have to add the following code to each and every test spec, but it sounds silly adding it to dozens of test files.



beforeEach(module(function($provide){
    $provide.value("MyLoginService", {
        isLoggedIn: function(){
            return true;
        },
        redirectForLogin: function(){}
    });
}));


Is there a way to tell Karma to use a mock service, adding that piece of code only once and in one place?


Thanks
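
One hedged approach: Jasmine's global beforeEach works from any file, so the override can live in a single helper that Karma loads ahead of the specs (the helper filename below is invented):


// test/helpers/mock-login-service.js
// list this file in karma.conf's "files" array before the spec files
beforeEach(module(function ($provide) {
  // every spec that loads a module using MyLoginService
  // now gets this fake instead of the real one
  $provide.value('MyLoginService', {
    isLoggedIn: function () { return true; },
    redirectForLogin: function () {}
  });
}));
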


Moq does not subscribe to events in constructor

I am using Moq (4.2.1502.911) in my unit tests (xUnit). In a constructor, the object being constructed tries to subscribe to events of its dependencies (constructor arguments), but it does not seem to work.


The code below simulates the problem. The Alarm class uses an ICam interface dependency to alert when something moves.



public interface ICam
{
event EventHandler VisualDetected;
}

public class Alarm : ICam
{
private ICam _cam;

public Alarm(ICam cam)
{
_cam = cam;

// Subscribe to forward events, DOES NOT WORK
_cam.VisualDetected += VisualDetected;
}

public event EventHandler VisualDetected;

// If I call this method explicitly, test succeeds
public void Subscribe()
{
// Subscribe to forward events outside the constructor
_cam.VisualDetected += VisualDetected;
}
}


Below are unit tests.


First Test: In the constructor, Alarm object subscribes to ICam's event, but in unit test when I raise the event of the ICam mock object, Alarm's event is not raised.



[Fact]
public void Alarm_SubscribesInCtor()
{
var cam = new Mock<ICam>();
var alarm = new Alarm(cam.Object);
var raised = false;
alarm.VisualDetected += (o, e) => raised = true;

cam.Raise(c => c.VisualDetected += null, new EventArgs());

Assert.True(raised); // FAILS
}


Second Test: Explicitly calls Subscribe method and the test passes.



[Fact]
public void Alarm_SubscribesOutsideCtor()
{
var cam = new Mock<ICam>();
var alarm = new Alarm(cam.Object);
var raised = false;
alarm.VisualDetected += (o, e) => raised = true;
alarm.Subscribe();

cam.Raise(c => c.VisualDetected += null, new EventArgs());

Assert.True(raised); // SUCCEEDS
}


The problem seems to be caused by some kind of laziness at the initialization stage of the mock objects, but I am not sure.


Is there any solution or any other way to ensure event subscription?
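
A hedged explanation that fits both results: this is C# event semantics rather than Moq laziness. _cam.VisualDetected += VisualDetected reads the alarm's own delegate field at that moment; in the constructor it is still null, so effectively nothing is subscribed, while Subscribe() runs after the test has attached a handler and therefore copies a non-null delegate. Forwarding through a method keeps the lookup at raise time:


public Alarm(ICam cam)
{
    _cam = cam;
    // subscribe a method, not the current (null) value of the event field
    _cam.VisualDetected += OnCamVisualDetected;
}

private void OnCamVisualDetected(object sender, EventArgs e)
{
    // read the delegate when the event actually fires
    EventHandler handler = VisualDetected;
    if (handler != null)
    {
        handler(this, e);
    }
}
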


Unit tests using Nocilla failing randomly

Current Setup : AFNetworking + Kiwi (for unit tests) + Nocilla (Stubbing network calls)


Issue Faced :


Some of the tests that use Nocilla to stub HTTP requests fail randomly at times.



__block NSString *fetchedData;
[someData updateDataWithHandler:^(NSString *errorMessage, AFHTTPRequestOperation *operation) {
[[errorMessage should] equal:@"No response."];
fetchedData = (errorMessage) ? @"Failure" : @"Success";
}];

[[expectFutureValue(fetchedData) shouldEventuallyBeforeTimingOutAfter(4)] equal: @"Failure"];


At times when this method fails, the assertion says



expected subject to equal (NSString) "Failure", got ((null)) (null)



Any ideas as to what might be causing these tests to fail randomly?
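
One commonly recommended piece of suite hygiene, sketched here in Kiwi terms: make sure Nocilla is started once and its stubs are cleared between examples, so an earlier spec's stubs cannot bleed into a later one. Stale or missing stubs are a frequent source of order-dependent, "random" failures like the nil above:


beforeAll(^{
    [[LSNocilla sharedInstance] start];
});

afterAll(^{
    [[LSNocilla sharedInstance] stop];
});

afterEach(^{
    // remove per-test stubs so the next example starts clean
    [[LSNocilla sharedInstance] clearStubs];
});
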


How to use multiple Excel files as a data source for a unit test in C#?

I am trying to make a unit test that takes its data from multiple Excel files; each Excel file contains a test case for the same unit test. I would like to put all the Excel files in one folder and let the unit test program iterate through all of them.


I have found several methods, like storing all the test cases in an XML file, but that is too tedious, as I would have to extract all the test cases from the Excel files and put them into the same XML file. I hope to find a more efficient way to do it. Any suggestions?
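
One hedged alternative to the DataSource attribute: skip the framework's data binding and iterate the folder inside a single test, loading each workbook yourself. Folder path and the two helpers below are placeholders for whatever reads and runs one case:


[TestMethod]
public void RunsEveryExcelCase()
{
    string folder = @"H:\SQA\TestCases"; // placeholder path
    foreach (string file in Directory.GetFiles(folder, "*.xlsx"))
    {
        // LoadCase / RunSingleCase are hypothetical helpers; RunSingleCase
        // should include the file name in its assertion messages so a
        // failing workbook is identifiable in the test output
        var testCase = LoadCase(file);
        RunSingleCase(testCase);
    }
}
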


Android Studio unit testing: read data (input) file

In a unit test, how can I read data from a json file on my (desktop) file system, without hardcoding the path?


I would like to read test input (for my parsing methods) from a file instead of creating static Strings.


The file is in the same location as my unit testing code, but I can also place it somewhere else in the project if needed. I am using Android Studio.
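
One hedged option with the Gradle unit-test support: put the file under src/test/resources, which should end up on the unit-test classpath, and load it through the class loader instead of a filesystem path (the resource file name here is invented):


import java.io.InputStream;
import java.util.Scanner;

public class ParserTest {

    private String readResource(String name) {
        // resolved from src/test/resources at test runtime
        InputStream in = getClass().getClassLoader().getResourceAsStream(name);
        return new Scanner(in, "UTF-8").useDelimiter("\\A").next();
    }

    @org.junit.Test
    public void parsesSampleJson() throws Exception {
        String json = readResource("sample_input.json"); // hypothetical file
        // ... feed json to the parsing method under test ...
    }
}
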


How to write a Test for c# Delegate to a StoredProc?

How do you write tests for delegate methods? Or: beware of two open connections, both with 'hooks' onto the same SQL table.


This was not straightforward to diagnose, test, and prove was not a problem in my current solution.


How could I have TDD'd or written unit/integration tests to have trapped this? Redesign suggestions ...



  1. Create a connection to the table 'TransferWebTransmit' to process all rows.

  2. Execute a Reader to loop through 'old' records, (ID=1)

  3. Call a delegate method to process the 'old' record. (NB keep current connection open until all rows are processed i.e. have called the delegate).


Delegate method:



  1. Opens a new connection, executes a Stored Proc 'TransferWebTransmitUpdate'

  2. which -> updates the 'TransferWebTransmit' row (ID=1), then does a SELECT on the (ID=1) row ----> cursor lock! ----> .NET throws "System.Data.SqlClient.SqlException (0x80131904): Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding". ----> Connections are locked. ----> Have to kill processes to recover.


Here's the delegate method:



public int Update(int transferID)
{
    var obj = new TransferWebMessage();
    int roweffected = 0; // declared here so it is in scope for the return

    using (SqlConnection conn = base.GetNewConnection())
    {
        using (SqlCommand sp_cmd = new SqlCommand())
        {
            sp_cmd.CommandText = "TransferWebTransmitUpdate";

            sp_cmd.CommandType = CommandType.StoredProcedure;
            sp_cmd.Parameters.AddWithValue("TransferID", transferID);

            sp_cmd.Connection = conn;
            conn.Open();
            SqlDataReader rdr = sp_cmd.ExecuteReader();
            while (rdr.Read())
            {
                roweffected = rdr.GetInt32(0);
            }
        }
    }
    return roweffected;
}


Here's the call to get the rows to process and call the delegate:



public void WatchForDataTransferRequests(_delegateMethod callback)
{
using (SqlConnection conn = new SqlConnection(_insol_SubscriberConnectionString))
{
// Construct the command to get the pending TransferWebTransmit rows
// from the database.
SqlCommand cmd = new SqlCommand(
"SELECT [TransferID]" +
" FROM [dbo].[TransferWebTransmit]" +
" ORDER BY [TransferID] ASC", conn);

cmd.Parameters.Add("@CurrentTransferID", SqlDbType.Int);
conn.Open();

SqlDataReader rdr = cmd.ExecuteReader();

// Process the rows
while (rdr.Read())
{
Int32 transferID = (Int32)rdr.GetInt32(0);
callback(transferID);
}
}
}
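
One hedged redesign that would have both surfaced and avoided the lock: materialise the pending IDs first, closing the reader and its connection, and only then invoke the delegate. With the callback typed as a plain delegate, the dispatch loop also becomes testable with a recording callback and no database at all:


public void WatchForDataTransferRequests(Action<int> callback)
{
    var pending = new List<int>();

    using (var conn = new SqlConnection(_insol_SubscriberConnectionString))
    using (var cmd = new SqlCommand(
        "SELECT [TransferID] FROM [dbo].[TransferWebTransmit]" +
        " ORDER BY [TransferID] ASC", conn))
    {
        conn.Open();
        using (SqlDataReader rdr = cmd.ExecuteReader())
        {
            while (rdr.Read())
            {
                pending.Add(rdr.GetInt32(0));
            }
        }
    } // reader and connection are fully closed before any callback runs

    foreach (int transferID in pending)
    {
        callback(transferID); // no outer lock is held during the update
    }
}
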

ParallelParameterized - how to save test results in parallel for each input in same order as input?


@RunWith(ParallelParameterized.class)
public class RoutingResponseShortRegressionOneByOne {

private int numberOfProcessedRequests;
private Object currentRequestIndexLock = new Object();


@Test
public void compareNewResponseToBaselineReturnsNoLargeDifferences() throws IOException {

int currentRequestIndex;
synchronized (currentRequestIndexLock) {
//currentRequestIndex = e2EResultShort.completeRoutingResponses.size();
currentRequestIndex = numberOfProcessedRequests;
numberOfProcessedRequests++;
}


When I run this code, I see numberOfProcessedRequests == 0 in every run; it doesn't increment as I would expect from a class member.


It's a ParallelParameterized Test.


Why does it create a new Test Class instance for every input?


I see the ctor is called only once.


Is there a different way to run the test in parallel for each input and still keep the results in the same order as the input?
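
For context, JUnit's Parameterized runner (and parallel derivatives of it) instantiates the test class once per test invocation, which is why an instance field reads 0 every time. A hedged sketch of keeping shared, thread-safe state, preserving input order by keying results on an index carried in with the parameters:


import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

public class SharedRunState {

    // static: survives across the per-parameter test instances;
    // thread-safe: the parallel runner touches it concurrently
    static final AtomicInteger processedRequests = new AtomicInteger();

    // key each result by the input's own index, so results can be read
    // back in input order after the run (e.g. from an @AfterClass hook)
    static final ConcurrentMap<Integer, String> resultsByInput =
            new ConcurrentHashMap<>();
}


Each parameter set would carry its index as one of the @Parameters values, and the test method would call resultsByInput.put(index, result) instead of relying on execution order.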