samedi 28 février 2015

Test a Django site displays the correct graph - image similarity?

I'm making a simple Django site that graphs a specified data set with matplotlib and displays the resulting graph. I've saved a static copy of the correct graph, and I want to compare this against the image displayed on the page.


screen capture of the site


home.html



<img id="id_graph_image" src="/graphs/selected_graph.png">


graphs/urls.py



urlpatterns = patterns('',
# other patterns...
url(r'^graphs/selected_graph.png$', 'graphs.views.show_selected_graph'),
)


graphs/views.py



def show_selected_graph(request):
# matplotlib code...
canvas = FigureCanvasAgg(figure)
response = HttpResponse(content_type='image/png')
canvas.print_png(response)
return response


graphs/tests.py



def test_selected_graph_shows_the_right_graphs(self):
# request and response...
graph_on_page = Image.open(StringIO(response.content))
expected_graph = Image.open('../graphs/static/scenario_1_statistic_1.png')
# TODO: compare graphs


I'd like to add a Selenium test or a unit test to confirm that the graph view returns the correct image for a given data set. I've tried comparing the two images with PIL and the RMS Difference of the histograms, but any two matplotlib graph images have similar histograms.
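
One low-tech alternative to histogram RMS, if the rendering is deterministic, is a pixel-level comparison. Here is a minimal sketch with PIL's ImageChops; the helper name and the tolerance parameter are mine, not part of the question's code:

from PIL import Image, ImageChops

def images_match(img_a, img_b, tolerance=0.0):
    # Per-pixel absolute difference of the two images
    diff = ImageChops.difference(img_a.convert('RGB'), img_b.convert('RGB'))
    if diff.getbbox() is None:
        return True   # pixel-for-pixel identical
    # Otherwise measure the total difference as a fraction of the maximum possible
    histogram = diff.histogram()   # 256 bins per channel, concatenated
    total = sum((i % 256) * count for i, count in enumerate(histogram))
    max_total = 255.0 * diff.size[0] * diff.size[1] * 3
    return (total / max_total) <= tolerance

In the test above this would become self.assertTrue(images_match(graph_on_page, expected_graph)).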


Is a "percentage difference" simple? Should I test this in a very different way?


I appreciate any and all help, thanks!


Android Studio Unit Test NullPointerException

When I run a unit test in Android Studio, I get a



NullPointerException: null



in the status bar, and the icon next to the test is dark. What is the problem? The message is very ambiguous.


Unit testing backpropagation neural network code

I am writing a backprop neural net mini-library from scratch and I need some help with writing meaningful automated tests. Up until now I have automated tests that verify that weight and bias gradients are calculated correctly by the backprop algorithm, but no test on whether the training itself actually works.


The code I have up until now lets me do the following:



  • Define a neural net with any number of layers and neurons per layer.

  • It can use any activation function per layer.

  • Using biases is also possible.

  • Layers of neurons can only be fully connected at the moment.

  • Training is only BP with gradient descent.

  • Must use train, validation and test sets (none of these sets can be empty at the moment).


Given all of this, what kind of automated test can I write to ensure that the training algorithm is implemented correctly? What function (sin, cos, exp, quadratic, etc.) should I try to approximate? In what range and how densely should I sample data from this function? What architecture should the NN have?


Ideally, the function should be fairly simple to learn so the test wouldn't last very long (1-3 seconds), but also complicated enough to provide some degree of certainty that the algorithm is implemented correctly.
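
For what it's worth, a common choice is sin(x) on [0, π], sampled at a couple of hundred points and fed to a small 1-8-1 tanh network; that usually trains in well under a second. Below is a sketch of such a smoke test; since the library's interface isn't shown, every API name here (Network, train, predict) is hypothetical:

import math
import random

def test_learns_sine_on_zero_to_pi():
    random.seed(42)                                   # keep the run repeatable
    xs = [i * math.pi / 200 for i in range(200)]      # 200 samples over [0, pi)
    data = [([x], [math.sin(x)]) for x in xs]
    random.shuffle(data)
    train, validation, test = data[:140], data[140:170], data[170:]

    net = Network(layers=[1, 8, 1], activation='tanh', use_bias=True)   # hypothetical API
    net.train(train, validation, epochs=500, learning_rate=0.5)         # hypothetical API

    mse = sum((net.predict(x)[0] - y[0]) ** 2 for x, y in test) / len(test)
    assert mse < 0.01, "MSE %.4f is too high; training is likely broken" % mse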


resharper test runner - how to execute by category?

Resharper version 7.1


Visual Studio 2012 Ultimate


Two part question:


Q1: When I select "run unit tests" in the Solution Explorer, ReSharper automatically opens a test fixture runner and executes ALL test fixtures. However, I have categorized my tests into two categories: 1) "long running" and 2) "short running". Is there a way, after performing these steps, to have the test runner execute only the "short running" tests?


Q2: With the test fixture runner open, I select Group By "Categories". I see the two categories I mentioned above. I right-click on "short running" and select "run tests". I expected that only the "short running" tests would execute; however, BOTH the "long running" and the "short running" tests execute. Is there a way to execute by category and have only that category execute?


How do I unit test a helper that uses a service?

I'm trying to unit test a helper that uses a service.


This is how I inject the service:



export function initialize(container, application) {
application.inject('view', 'foobarService', 'service:foobar');
}


The helper:



export function someHelper(input) {
return this.foobarService.doSomeProcessing(input);
}

export default Ember.Handlebars.makeBoundHelper(someHelper);


Everything works up to this point.


The unit test doesn't know about the service and fails. Here is what I tried:



test('it works', function(assert) {

var mockView = {
foobarService: {
doSomeProcessing: function(data) {
return "mock result";
}
}
};

// didn't work
var result = someHelper.call(mockView, 42);

assert.ok(result);
});

When using Mockito, what is the difference between the actual object and the mocked object?

In the program below, I am trying to use Mockito with JUnit in my test case. But I don't see how Mockito is helping to create objects for my test. I don't see anything special here, as it seems as if Mockito is instantiating the actual object.



public class TestCase1 {

    @Mock
    MyClass myClass;

    @Before
    public void setup() {
        MockitoAnnotations.initMocks(this);
    }

    @Test
    public void testAddition() {
        when(myClass.add(2, 2)).thenReturn(20);
        assertEquals(4, myClass.add(2, 2));
    }
}


Is mocking an object the same as injecting (DI) an object? I appreciate your help!


Visual Studio unit test detect failure

So I am writing tests in Visual Studio 2015 and executing them with MSTest. What I want to do is write some code so that when a test is done I can update a Rally test case. What I am looking for is how to detect whether the test case that just ran passed or failed. I have been looking at reflection but am not seeing an option for the test:



[TestCleanup()]
public void MyTestCleanup()
{
// Code to check if test passes or fails

Common.DriverQuit();
}


Then, based on that answer, I can write the rest of the code. I just need to figure out how to gain access to the test result, if possible.
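
If I remember the MSTest API correctly (worth double-checking against your version), the framework injects a TestContext property whose CurrentTestOutcome is already populated by the time TestCleanup runs, so no reflection is needed. A sketch; the Rally update call is a hypothetical helper:

[TestClass]
public class MyTests
{
    // MSTest assigns this property automatically for each test.
    public TestContext TestContext { get; set; }

    [TestCleanup]
    public void MyTestCleanup()
    {
        bool passed = TestContext.CurrentTestOutcome == UnitTestOutcome.Passed;
        // UpdateRallyTestCase(TestContext.TestName, passed);   // hypothetical helper

        Common.DriverQuit();
    }
}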


What is such a testing approach called?

Thanks in advance!


Please tell me, what is it called when I deploy a copy of my site and write scripts that test my classes in a working environment? I do it because it's difficult to test each class separately. Thanks again!


I want to write gmock Google Test cases for the scenario below

I have a set of functions within a singleton class, and I want to mock one of them. Let's take the piece of code below. The function setname() will return the string from classyyy's setname() function, so here I want to test the return value. Please tell me how to write the test case for this situation.



class mockBtMxxx : public BTMxxx
{
public:
MOCK_METHOD2(setname, string(const int& id, const string& name));
};

// Test case for Setting Local Device Friendly Name.
TEST(TestBTC, GMockSetNameTest)
{
mockBtMxxx mock_Btm;
int id = 12345;
string str = "Hello";
EXPECT_CALL(mock_Btm, setname(_,_)).WillOnce(Return("Hello"));
}


I am getting the errors below:

error: ‘BTMxxx::BTMxxx()’ is private
gmock-actions.h:491:66: error: no matching function for call to ‘ImplicitCast_(const char*&)’


Mockito - check if ANY method was called on an object(object was accessed)

I want to write a test that passes a mock object A into an object under test B and checks if ANY of the methods of A were called. To give some context, class B is designed to manipulate A in a specific way, based on a set of parameters, and under certain conditions it shouldn't do anything to it at all. So my goal is to test that scenario. I know how to test whether a specific method was called or not:



verify(A, never()).myMethod();


But I can't find a way to make sure that NONE of the methods A has were called. Is there a way to do this?
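
For reference, Mockito also has whole-mock verification; as far as I recall, verifyZeroInteractions (and its cousin verifyNoMoreInteractions) fails the test if anything at all was called on the mock:

// Fails the test if any method at all was invoked on the mock A.
verifyZeroInteractions(A);

// Alternative: verify the interactions you do expect first, then fail
// if anything else happened on the mock.
// verifyNoMoreInteractions(A);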


vendredi 27 février 2015

Tested class is calling actual object, instead of mocked object

I'm trying to mock an external library, however the actual object created in APKDecompiler is being used, instead of the mock object.


Test code



import com.googlecode.dex2jar.v3.Dex2jar;
import jd.core.Decompiler;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.powermock.api.easymock.PowerMock;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;
import APKDecompiler;

import static org.easymock.EasyMock.expect;
import static org.easymock.EasyMock.expectLastCall;
import static org.junit.Assert.assertEquals;
import static org.powermock.api.easymock.PowerMock.*;

import java.io.File;
import java.io.IOException;

@RunWith(PowerMockRunner.class)
@PrepareForTest({Dex2jar.class})
public class TestAPKDecompiler {
//As this only uses external libraries, I will only test that they are called correctly by mocking them.
@Test
public void testAPKDecompiler() {
try {
File testFile = new File("ApkExtractor/src/test/resources/testApp.jar");
String expectedDirectory = testFile.getAbsolutePath().substring(0, testFile.getAbsolutePath().length() - 4);
mockStatic(Dex2jar.class);
Dex2jar mockApkToProcess = createMock(Dex2jar.class);
Decompiler mockDecompiler = createNiceMockAndExpectNew(Decompiler.class);


expect(Dex2jar.from(testFile)).andStubReturn(mockApkToProcess);

mockApkToProcess.to(new File(expectedDirectory + ".jar"));
expectLastCall();

PowerMock.expectNew(Decompiler.class).andReturn(mockDecompiler).anyTimes();

expect(mockDecompiler.decompileToDir(expectedDirectory + ".jar", expectedDirectory)).andReturn(0);


replay(mockApkToProcess);
PowerMock.replay(mockDecompiler);
replayAll();
String actualDirectory = APKDecompiler.decompileAPKToDirectory(testFile);

verify(mockApkToProcess);
verify(mockDecompiler);
verifyAll();

assertEquals(expectedDirectory, actualDirectory);
testFile.delete();
}
catch(Exception e){
e.printStackTrace();
}
}
}


Class code



import com.googlecode.dex2jar.v3.Dex2jar;
import jd.core.Decompiler;
import jd.core.DecompilerException;

import java.io.File;
import java.io.IOException;

public class APKDecompiler {
public static String decompileAPKToDirectory(File filename) throws IOException, DecompilerException {
String filenameWithoutFileExtension = filename.getAbsolutePath().substring(0, filename.getAbsolutePath().length() - 4);
Dex2jar apkToProcess = Dex2jar.from(filename);
File jar = new File(filenameWithoutFileExtension + ".jar");
apkToProcess.to(jar);
Decompiler decompiler = new Decompiler();

decompiler.decompileToDir(filenameWithoutFileExtension + ".jar", filenameWithoutFileExtension);

return filenameWithoutFileExtension;
}
}


I've tried this and I haven't had any luck. EasyMock: Mocked object is calling actual method


I get a FileNotFoundException when decompiler.decompileToDir is called, which shouldn't happen as I should be mocking the class.


Any help would be greatly appreciated.


Go Lang Concurrent Unit Test for Method with Mutex

Writing a very simple "load test" application in Go for a homework assignment. I'm functionally complete, but am trying to write a concurrent unit test for a method that effectively resets a counter.



func (c *Counter) Reset(statistic string) {
c.Lock()
c.counters[statistic] = START_VALUE
c.Unlock()
}


Counter is a struct with a sync.RWMutex and a map[string]int. Here's the associated unit test function:



func TestReset(t *testing.T) {
counter := stats.New()
var wg sync.WaitGroup
wg.Add(3)
go func() {
defer wg.Done()
increment(counter, dataSetOne)
}()
go func() {
defer wg.Done()
counter.Reset(stats.KEY_100)
}()
go func() {
defer wg.Done()
increment(counter, dataSetTwo)
}()
wg.Wait()

actual := counter.Copy()

for k, v := range expectedReset {
if v != actual[k] {
t.Errorf("counter %s: expected %d, got %d", k, v, actual[k])
}
}
}


All I'm doing is adding a bunch of data to some set of keys, resetting one key to zero, then adding a bunch more data. The problem, I feel, is that my assertion for what's expected changes depending on the order of operations. The test assumes that each goroutine will acquire the lock sequentially.


With the understanding that it's a bit late to change the implementation to use channels, is there something I'm missing, or a better way to do this?
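
One pragmatic split, reusing the names from the question (and assuming START_VALUE is 0 and Copy() returns the underlying map), is to assert exact values only in a sequential test and keep a separate concurrency test that just has to survive the race detector:

func TestResetSetsCounterBackToStart(t *testing.T) {
    counter := stats.New()
    increment(counter, dataSetOne)   // no goroutines: fully deterministic
    counter.Reset(stats.KEY_100)

    if got := counter.Copy()[stats.KEY_100]; got != 0 {
        t.Errorf("counter %s: expected 0 after Reset, got %d", stats.KEY_100, got)
    }
}

// Run with `go test -race`. This test only checks that concurrent use is free
// of data races; it deliberately asserts nothing about the final values,
// because the interleaving of Reset with the increments is not deterministic.
func TestResetIsSafeForConcurrentUse(t *testing.T) {
    counter := stats.New()
    var wg sync.WaitGroup
    wg.Add(2)
    go func() { defer wg.Done(); increment(counter, dataSetOne) }()
    go func() { defer wg.Done(); counter.Reset(stats.KEY_100) }()
    wg.Wait()
}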


How to unit test removeObserver in dealloc

I am trying to write a failing test that would verify removeObserver is called when an object is dealloc'ed. However, due to the fact that the object is no longer around, how do I verify this behaviour? Am I going about testing this incorrectly? I am using OCMockito as my mocking framework.


Here is what I have so far.



- (void)test_dealloc_NotificationCenterRemoveObserver_ShouldCallRemoveObserver {
self.mockNotificationCenter = mock([NSNotificationCenter class]);
self.sut.defaultNotificationCenter = self.mockNotificationCenter;

self.sut = nil;

[MKTVerify(self.mockNotificationCenter) removeObserver:anything() name:UIContentSizeCategoryDidChangeNotification object:nil];
}

Declaring facts that are true only within the scope of a single unit test in SWI-Prolog

As an example for this question, I have a very simple Prolog file main.pl, in which I've defined the colours of some shapes.



colour(circle, red).
colour(triangle, red).
colour(square, blue).


Now below that I define a predicate same_colour/2, which is true if both S1 and S2 are the same colour.



same_colour(S1, S2) :-
colour(S1, C),
colour(S2, C).


Testing at the top level indicates that this predicate works as expected.



?- same_colour(circle, triangle).
true.

?- same_colour(circle, square).
false.


I am trying to write unit tests for same_colour/2 using SWI-Prolog's unit testing framework plunit, but I want to declare facts within each individual test that are only true within the scope of that test. I've tried using the setup option for individual tests, as well as asserta, neither of which work. All of the tests below fail.



:- begin_tests(same_colour).

test(same_colour) :-
colour(shape_a, colour_1),
colour(shape_b, colour_1),
same_colour(shape_a, shape_b).

test(same_colour) :-
asserta(colour(shape_a, colour_1)),
asserta(colour(shape_b, colour_1)),
same_colour(shape_a, shape_b).

test(same_colour, [
setup(colour(shape_a, colour_1)),
setup(colour(shape_b, colour_1))
]) :-
same_colour(shape_a, shape_b).

:- end_tests(same_colour).


I've also tried:



test(same_colour, [
setup(asserta(colour(shape_a, colour_1))),
setup(asserta(colour(shape_b, colour_1))),
cleanup(retract(colour(shape_a, colour_1))),
cleanup(retract(colour(shape_b, colour_1)))
]) :-
same_colour(shape_a, shape_b).


that is, first declare that colour(shape_a, colour_1) and colour(shape_b, colour_1) are facts, do the test, then 'undeclare' them. However, this test fails as well. Using trace it seems that colour(shape_a, colour_1) is never asserted (or at least is not true while my test is running.)



Call: (18) plunit_same_colour:'unit body'('same_colour@line 13', vars) ? creep
Call: (19) same_colour(shape_a, shape_b) ? creep
Call: (20) colour(shape_a, _G738) ? creep
Fail: (20) colour(shape_a, _G738) ? creep
Fail: (19) same_colour(shape_a, shape_b) ? creep
Fail: (18) plunit_same_colour:'unit body'('same_colour@line 13', vars) ? creep


I can understand now why the first two tests do not work. In the first I am testing whether colour(shape_a, colour_1) is true, when it hasn't been declared before, and in the second I just don't think it is correct to use asserta from within a predicate definition. Though it feels like something similar to my third or fourth test should be able to achieve what I am trying to do?
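
For what it's worth, the recipe I would try (hedged, since it depends on how main.pl is loaded) is to declare colour/2 dynamic and perform both assertions inside a single setup goal with a matching cleanup; if main.pl is a proper module, the asserted terms may also need an explicit module qualifier:

:- dynamic colour/2.   % in main.pl, so asserta/retract on colour/2 is permitted

test(same_colour_scoped,
     [ setup(( asserta(colour(shape_a, colour_1)),
               asserta(colour(shape_b, colour_1)) )),
       cleanup(( retract(colour(shape_a, colour_1)),
                 retract(colour(shape_b, colour_1)) ))
     ]) :-
    same_colour(shape_a, shape_b).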


Most convenient way to unit test Xml/XElement result

I have some code that heavily uses the XElement class to build segments of XML; for example, the code looks like



XNamespace ns = "ns";
XElement myXml = new XElement(
ns + "filter",
new XElement(
ns + "and",
new XElement(
ns + "equals",
new XAttribute("name", "uid"),
new XElement(ns + "value", "some text"))));


It eventually spits out some Xml code equivalent to



<ns:filter>
<ns:and>
<ns:equals name="uid">
<ns:value>some text</ns:value>
</ns:equals>
</ns:and>
</ns:filter>


Now I need to unit test the logic by going through XPath, getting attributes and so on. I can always write my own LINQ to XML to check the data, but that is very tedious since I need to unit test quite a lot of similar code.


I looked into Fluent Assertions; it is very close, but it does not seem to be able to validate a nested element's value.
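
One option that may cover the nested-value case (a suggestion of mine, not something from the question) is to build the expected fragment with the same XElement API and compare whole trees with XNode.DeepEquals, which checks names, attributes and nested element values recursively. Note that a tree produced by XElement.Parse carries an explicit xmlns attribute node, which can make DeepEquals stricter than you expect, so constructing the expected value programmatically is the least surprising route:

using System.Xml.Linq;

XNamespace ns = "ns";
XElement expected =
    new XElement(ns + "filter",
        new XElement(ns + "and",
            new XElement(ns + "equals",
                new XAttribute("name", "uid"),
                new XElement(ns + "value", "some text"))));

bool sameTree = XNode.DeepEquals(expected, myXml);
// e.g. Assert.IsTrue(sameTree), or sameTree.Should().BeTrue() with Fluent Assertions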


Does anyone have a good recommendation?


iOS Swift Mocking CKContainer for Unit Testing

I am using CloudKit in my application and am trying to mock CKContainer to test my managers. Here is what I tried:



func testAccountStatus() {

class MockCloudContainer: CKContainer {

override func accountStatusWithCompletionHandler(completionHandler: ((CKAccountStatus, NSError!) -> Void)!)
{
completionHandler(CKAccountStatus.NoAccount, NSError())
}
}

let loginManager = LoginManager.sharedInstance
let expectation = expectationWithDescription("iCloudStatus")

var isTestFinished = false
loginManager.iCloudStatusWithCompletionHandler { (status, error) -> Void in

if (isTestFinished) {
return
}

expectation.fulfill()
XCTAssertTrue(status.isEqualToString("NoAccount"), "Status is Available")

}

waitForExpectationsWithTimeout(5, { error in
isTestFinished = true
XCTAssertNil(error, "Error")
})
}


But I am getting an error while compiling the code:



:0: error: cannot override 'init' which has been marked unavailable



What is the best way to use a mock object to test my LoginManager class?


Thanks


Is it possible to perform UI tests in an iOS library?

I am creating an iOS library with a class that extends UIView, and I need to perform unit testing on it. But this custom class needs to be inserted into a view controller and be visible to the user to execute correctly. It also checks whether view.window != nil.


In an iOS application unit test method, I've inserted a UIViewController using:



UIViewController *testViewController = [[MyViewController alloc] init];
UIApplication.sharedApplication.keyWindow.rootViewController = testViewController;


With that I could insert my custom view in the view controller and perform unit tests in it.


But in the iOS library project's unit test method, UIApplication.sharedApplication returns nil.


Is there an alternative way to perform UI tests in an iOS library?


Unit Testing a Windows Service Event Handlers

I am writing a Windows Service and I want to write unit tests against the Event Handlers for checking how they manage the worker thread. I've managed the OnStart, OnStop and OnPause with no problem, but the OnContinue is causing me some angst.


Windows Service


The Program.Main()


This does nothing out of the ordinary - it just sets up the Service to run.


MyService.cs

This has a ManualResetEvent:



private static ManualResetEvent _resetEvent = new ManualResetEvent(false);


MyService.OnStart()


This creates a Worker thread, so that the Main thread can continue listening to events from the Service Control Manager


MyService.OnStop()


This checks if the Workerthread.IsAlive is true, and if so, calls .Abort() on it.


MyService.OnPause()


This checks if the Worker's ThreadState is either Suspended | SuspendRequested, and if not then it calls .Reset() on the _resetEvent.


Note: currently unsure if these are the ideal states, or if I should be looking at the state 'WaitSleepJoin'.


MyService.OnContinue()


This checks if the Worker's ThreadState is either Suspended | SuspendRequested, and if so then it calls .Set() on the _resetEvent.


MyTests


OnStart() tests


This is easy, I just call the Event Handler and it sees there's no Worker thread, so creates one.


OnStop() tests


Slightly more complex. Here in the test I create a Worker thread that calls a delegate which sends it to sleep. I then use Reflection to set it as the Worker thread private property in the Service, and then call the OnStop() event handler. This finds the started Worker Thread which is sleeping, so aborts it.


OnPause() tests


Similar to the OnStop() tests, but instead of creating a sleeping thread, I create one that iterates through a small loop (big enough to last a few seconds). When I call the OnPause() method, it finds the worker thread, which is working, and calls .Reset() on the _resetEvent.


OnContinue() tests


So my plan here was to follow the pattern of the OnPause and OnStop tests, but this time create a worker thread that was currently suspended. However, in .NET 4.5, Thread.Suspend() is obsolete.


I therefore created a local ManualResetEvent [one per unit test; I have multiple Unit Tests for the OnContinue() event handler] and in my delegate method that the Worker thread is running, I set it to call .WaitOne(). If I run one test, then that works fine. But if I run them all together then it appears that the Unique ManualResetEvent for each test somehow affects the other tests.


Example code from just one Unit Test where I create the Worker Thread and inject it into the Service's private thread method:



const BindingFlags InstanceFlags =
BindingFlags.Instance | BindingFlags.NonPublic;

PropertyInfo prop = t.GetProperty("MyWorkerThread", InstanceFlags);
EventWaitHandle wh2 = new ManualResetEvent(true);
var thread = new Thread(() =>
{
var sb = new Lazy<StringBuilder>();
for (int i = 0; i < 10000000; i++)
{
wh2.WaitOne();
sb.Value.Append(i.ToString());
}
});

thread.Start();
prop.SetValue(service, thread, null);


So when I duplicate this in another test, but have "wh3" rather than "wh2", then my tests all seem to trip over themselves.


Tools: Visual Studio 2013 and the NUnit test runner.


Would appreciate it if anyone could point out where I'm going wrong, or better still, point me to a better way to achieve this.


Many thanks


Griff


How to properly clean up after using an HttpClient in a unit test

In a unit test using an Apache HttpClient to fire requests, I have seen the following setup and cleanup code:



private HttpClient httpClient;
private HttpRequestBase httpRequest;


@Before
public void setUp() throws Exception {
httpClient = new DefaultHttpClient();
}

@After
public void closeRequests() {
if (httpRequest != null) {
httpRequest.releaseConnection();
httpRequest = null;
}
}


The tests then, for example, send GET requests and check the response:



@Test
public void getSomething() throws Exception {
httpGet = new HttpGet("http://some/url");
HttpResponse response = httpClient.execute(httpGet);
assertThat(response.getStatusLine().getStatusCode(), is(HttpStatus.SC_OK));
}


Now my question is: Do these tests properly clean up after themselves? From what I understand, the releaseConnection() call only hands back the connections to the client's connection manager but doesn't actually close it.


So shouldn't the tests rather do this:



@After
public void closeConnections() {
httpClient.getConnectionManager().shutdown();
}


And would this properly close all connections even without calling releaseConnection() on the http request instances?


Grails mocking method in service which uses rest plugin

I'm very new to Grails but have been using Ruby on Rails for the past few months. I can't seem to get my head around how to correctly mock some functionality within a service I have so that I can properly unit test it.


I have a RestService which uses a RestBuilder plugin



import javax.persistence.Converter;
import org.codehaus.groovy.grails.web.json.JSONArray
import org.json.JSONObject

import grails.converters.JSON
import grails.plugins.rest.client.RestBuilder
import grails.transaction.Transactional

@Transactional
class RestService {

def retrieveFromRESTClient(url) {
System.properties.putAll( ["http.proxyHost":"proxy.intra.bt.com", "http.proxyPort":"8080", "https.proxyHost":"proxy.intra.bt.com", "https.proxyPort":"8080"] )

def restBuilder = new RestBuilder()
def clientResponse = restBuilder.get(url)

// For development purposes
print "retrieveFromRESTClient: " + clientResponse.json

return clientResponse
}
}


I'm attempting to write a unit test for retrieveFromRESTClient(), and my thought is that I should be mocking the restBuilder.get() plugin call so it doesn't go off and actually do a GET request to a URL during the test. I've attempted a few things already, such as extracting the plugin functionality into its own method:



def retrieveFromRESTClient(url) {
System.properties.putAll( ["http.proxyHost":"proxy.intra.bt.com", "http.proxyPort":"8080", "https.proxyHost":"proxy.intra.bt.com", "https.proxyPort":"8080"] )

def clientResponse = getResponse(url)

// For development purposes
print "retrieveFromRESTClient: " + clientResponse.json

return clientResponse
}

def getResponse(url) {
def restBuilder = new RestBuilder()
def resp = restBuilder.get(url)
resp
}


and in my RestServiceSpec I attempt to mock getResponse:



import org.springframework.http.ResponseEntity;

import grails.plugins.rest.client.RestBuilder;
import grails.test.mixin.TestFor
import groovy.mock.interceptor.MockFor
import spock.lang.Specification

@TestFor(RestService)
class RestServiceSpec extends Specification {

def cleanup() {
}

void "retrieveFromRESTClient's responses JSON can be accessed"() {
when:
service.metaClass.getResponse { ResponseEntity<String> foo -> return new ResponseEntity(OK) }
def resp = service.retrieveFromRESTClient("http://ift.tt/1MYRQQX")

then:
print resp.dump()
assert resp.json == [:]
}
}


Although this test passes, when I look at resp.dump() in the test-reports I see it's still gone and made a request to 'mocking.so.it.doesnt.matter' and returned that object instead of the mocked ResponseEntity which I assumed it would return.


test-report output:



retrieveFromRESTClient: [:]<grails.plugins.rest.client.RestResponse@433b546f responseEntity=<404 Not Found,<HTML><HEAD>
<TITLE>Network Error</TITLE>
<style>
body { font-family: sans-serif, Arial, Helvetica, Courier; font-size: 13px; background: #eeeeff;color: #000044 }
li, td{font-family: Arial, Helvetica, sans-serif; font-size: 12px}
hr {color: #3333cc; width=600; text-align=center}
{color: #ffff00}
text {color: #226600}
a{color: #116600}
</style>
</HEAD>
<BODY>
<big><strong></strong></big><BR>
<blockquote>
<TABLE border=0 cellPadding=1 width="80%">
<TR><TD>
<big>Network Error (dns_unresolved_hostname)</big>
<BR>
<BR>
</TD></TR>
<TR><TD>
RYLCR-BC-20
</TD></TR>
<TR><TD>
Your requested host "mocking.so.it.doesnt.matter" could not be resolved by DNS.
</TD></TR>
<TR><TD>

</TD></TR>
<TR><TD>
<BR>

</TD></TR>
</TABLE>
</blockquote>
</BODY></HTML>
,{Cache-Control=[no-cache], Pragma=[no-cache], Content-Type=[text/html; charset=utf-8], Proxy-Connection=[close], Connection=[close], Content-Length=[784]}> encoding=UTF-8 $json=[:] $xml=null $text=null>


My end goal is to bypass the plugin's get call and return a ResponseEntity object. I'm not really sure whether I'm using the correct approach for this.


Python Nose tests, SQLAlchemy, and "convenience functions"

I have a question about how to design good Nose unit tests (using per test transactions and rollbacks) not just around SQLAlchemy models, but also around convenience functions I've written which surround the creation of SQLAlchemy models.


For example, I have a decent understanding of how to write a basic unit test class, which includes the necessary setup and teardown fixtures to wrap all of the tests in transactions, and roll them back when the test is complete. However, all of these tests so far involve directly creating models. For example, testing a User model like so (BaseTestCase contains setup/teardown fixtures):



from Business.Models import User

class test_User(BaseTestCase):

def test_user_stuff(self):
user = User(username, first_name, last_name, ....)
self.test_session.add(user)
self.test_session.commit()
# do various test stuff here, and then the
# transaction is rolled back after the test ends


However, I've also written a convenience function which wraps the creation of a User object. It handles various things like confirming that the password matches the verification password, then creating a salt, hashing the password + salt, etc., and then putting those values into the relevant columns/fields of the User table/object. It looks something like this:



def create_user(username, ..., password, password_match):

if password != password_match:
raise PasswordMatchError()

try:
salt = generate_salt()
hashed_password = hash_password(password, salt)

user = User(username, ..., salt, hashed_password)
db_session.add(user)
db_session.commit()

except IntegrityError:
db_session.rollback()
raise UsernameAlreadyExistsError()

return user


There's obviously more to it than that, but that's the gist. Now, I'd like to unit test this function as well, but I'm not sure of the correct way to wrap this in unit test cases that use a test database, roll back transactions after every test, etc.



from Business.Models.User import create_user

class test_User(BaseTestCase):

def test_create_user_stuff(self):
user = create_user(username, first_name, last_name, ....)
# do various test stuff here, and then how do
# I finangle things so the transaction executed by
# create_user is rolled back after the test


Thanks in advance for the help, and pointing me in the right direction.


Unit testing validation with express-validator

How can I unit test my validations that are done using express-validator?


I have tried creating a dummy request object, but I get the error: TypeError: Object #<Object> has no method 'checkBody'. I am able to manually test that the validation works in the application.


Here is what I have tried:



describe('couponModel', function () {
it('returns errors when necessary fields are empty', function(done){
var testBody = {
merchant : '',
startDate : '',
endDate : ''
};
var request = {
body : testBody
};
var errors = Model.validateCouponForm(request);
errors.should.not.be.empty;
done();
});
});


My understanding is that the checkBody method is added to the request object when I have app.use(expressValidator()) in my Express application. In this unit test, however, I do not have an instance of the Express application available, as I am only testing that the validation works. The validation method I am testing is not called directly from the application anyway; it is only called through a POST route, which I do not want to call in a unit test as it involves a database operation.
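
One trick that may be worth trying (hedged: it relies on express-validator being a plain middleware factory, which it is in the 2.x line) is to run the middleware by hand against the dummy request, so checkBody and friends get attached before your validation function runs:

var expressValidator = require('express-validator');

describe('couponModel', function () {
  it('returns errors when necessary fields are empty', function (done) {
    var request = {
      body: { merchant: '', startDate: '', endDate: '' },
      query: {},
      params: {}
    };

    // Invoke the middleware directly with its (req, res, next) signature.
    expressValidator()(request, {}, function () {
      var errors = Model.validateCouponForm(request);
      errors.should.not.be.empty;
      done();
    });
  });
});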


Bitronix + Spring tests + Different spring profiles

I have several tests which all extend the same root test class, which defines the Spring test application context. One of my tests uses a different profile, so I have annotated the child class with @ActiveProfiles("specialTestProfile"); this profile creates a special mock bean which is injected into the context. I want to clear my context before and after executing this test, but I haven't found the correct way to do it. I know that the Spring test framework does some context caching, and in my case I should have two different contexts, so it should not be necessary to reload the context. But it is not working because of Bitronix, which generates this strange error if I don't clean the context:



Caused by: bitronix.tm.resource.ResourceConfigurationException: cannot create JDBC datasource named unittestdb
at bitronix.tm.resource.jdbc.PoolingDataSource.init(PoolingDataSource.java:57)
at sun.reflect.GeneratedMethodAccessor404.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeCustomInitMethod(AbstractAutowireCapableBeanFactory.java:1608)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1549)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1479)
... 62 more
Caused by: java.lang.IllegalArgumentException: resource with uniqueName 'unittestdb' has already been registered
at bitronix.tm.resource.ResourceRegistrar.register(ResourceRegistrar.java:55)
at bitronix.tm.resource.jdbc.PoolingDataSource.buildXAPool(PoolingDataSource.java:68)
at bitronix.tm.resource.jdbc.PoolingDataSource.init(PoolingDataSource.java:53)
... 68 more


Even if I reload the context for each test class (by annotating my parent class with @DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_CLASS)), I still get the error above at some point. Do you have any idea how to solve this problem?


Pointless unit test

I have a baseService class that most of my services inherit from, which looks like this.



public abstract class BaseService<T> : IBaseService<T>
where T : class, IBaseEntity
{
protected IDataContext _context;
protected IValidator<T> _validator = null;

protected BaseService(IDataContext context)
{
_context = context;
}

protected BaseService(IDataContext context, IValidator<T> validator)
: this(context)
{
_validator = validator;
}

public virtual async Task<ICollection<T>> GetAllAsync()
{
return await _context.Set<T>().ToListAsync();
}

public virtual Task<T> GetAsync(long id)
{
return _context.Set<T>().Where(e => e.Id == id).FirstOrDefaultAsync();
}

public virtual Task<ValidationResult> ValidateAsync(T t)
{
if (_validator == null) throw new MissingFieldException("Validator does not exist for class " + t.GetType().ToString() + ". override method if no validation needed");
return _validator.ValidateAsync(t);
}

public virtual async Task<int> AddAsync(T t)
{
var results = await ValidateAsync(t);

if (!results.IsValid) {
throw new ValidationException(results.Errors);
}

if (_context.GetState(t) == EntityState.Detached)
{
_context.Set<T>().Add(t);
_context.SetState(t, EntityState.Added);
}

return await _context.SaveChangesAsync();
}

public virtual async Task<int> UpdateAsync(T updated)
{
var results = await ValidateAsync(updated);

if (!results.IsValid)
{
throw new ValidationException(results.Errors);
}

if (_context.GetState(updated) == EntityState.Detached)
{
_context.SetState(updated, EntityState.Modified);
}

return await _context.SaveChangesAsync();
}

public virtual Task<int> DeleteAsync(T t)
{
_context.SetState(t, EntityState.Deleted);

return _context.SaveChangesAsync();
}
}


Am I right in thinking that it is pointless to unit test this in every single one of my classes that inherits from this service, and that I should instead test the functionality in my integration testing?


Finding useless unit tests with PIT

Assume we have a code we'd like to test:



class C {
int doSmth() {
return 1;
}
}


Now assume we have two unit tests placed within a single class. The first one "tests everything" while the second one "does nothing":



@RunWith(JUnit4.class)
public final class CTest {

@Test
@SuppressWarnings("static-method")
public void testDoSmth() {
assertEquals(1, new C().doSmth());
}

@Test
@SuppressWarnings("static-method")
public void testDoSmth2() throws Exception {
Thread.sleep(1000);
}
}


This is an IRL example: I've seen dozens of tests "fixed" by replacing the test contents with some useless code, as the contract of code being tested changes over time.


Now, PIT "entry" unit is a class containing test methods (not an individual test metod itself), so in the above case PIT will not only show 100% line coverage, but also 100% mutation coverage.


Okay, I'm relieved to know I have 100% mutation coverage, but how do I identify a useless test -- testDoSmth2() in the above case (provided my mutation coverage is high)?


Testing your test cases

What is a good practice for testing test cases, checking them for false positives?


For example, I am writing a mission-critical class in JavaScript. To make sure every little nook and cranny is covered, we have well over 300 unit tests for it. From time to time, the code changes, breaks--and one or two tests continue to come back positive--usually because of JavaScript's willingness to convert various data types to boolean.


Are there well-worn patterns out there in the world used by great QA teams to hunt for false positives?


Mocking and Unit Test Coverage library in C

What is a good library for mocking in C, similar to Mockito in Java? I tried CMock, but was not successful and found the documentation hard to understand.


Please also suggest a unit test coverage framework for C that can be integrated into Eclipse, where I can see the covered and uncovered statements.


Someone suggested SWIG, where you convert C code to Python and do the mocking using libraries in Python. Would this be better than native C mocking?


Run NUnit tests project before iOS/Android application

I have a PCL library project where the main code is located, and I have separate iOS and Android projects. I've created a project with NUnit for testing the PCL library code. Run on their own, the tests work pretty well, but I want to run these tests right before building the iOS/Android project. I've tried to add a custom command with nunit-console but didn't have any success. Is it possible at all?


Method setUp in android.test.AndroidTestCase not mocked

I'm trying to come to terms with the new unit test feature of Android Studio. I've followed the instructions on http://ift.tt/1KkPIk8. The description there explicitly mentions the 'Method ... not mocked' error and suggests putting the following into build.gradle:



android {
// ...
testOptions {
unitTests.returnDefaultValues = true
}
}


This works in so far as the tests run when started from the command line with



gradlew test --continue


but not when I run the test class from Android Studio via right-click -> Run. That way, I get the same error again:



java.lang.RuntimeException: Method setUp in android.test.AndroidTestCase not mocked. See http://ift.tt/1zlzRtn for details.
at android.test.AndroidTestCase.setUp(AndroidTestCase.java)
at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:86)
at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:74)
at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:211)
at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:67)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:134)


Any ideas on how to solve this?


Sometimes running xcode test on CLI returns "manager not ready"

I normally run my Xcode unit tests on the command line using these commands:


clean:



xcodebuild -workspace appName.xcworkspace -scheme "Shared appName" -destination "platform=iOS Simulator,name=iPhone 5s,OS=8.1" clean


then build:



xcodebuild -workspace appName.xcworkspace -scheme "Shared appName" -destination "platform=iOS Simulator,name=iPhone 5s,OS=8.1" build


then test (with dry run):



xcodebuild -workspace appName.xcworkspace -scheme "Shared appName" -destination "platform=iOS Simulator,name=iPhone 5s,OS=8.1" test -dry-run


and sometimes I get this error:



2015-02-27 11:01:50.417 Registering for testmanagerd availability notify post.
2015-02-27 11:01:50.417 testmanagerd availability notify_get_state check indicated manager not ready, waiting for notify post.
2015-02-27 11:02:50.371 60s elapsed since launch without testing starting, sending logs to stderr


Any idea how to prevent this from happening? I'm assuming that testmanagerd is a test daemon or something? Where can I find documentation about it?


Use same junit temporary folder for multiple unit tests

In my test class I use JUnit's TemporaryFolder rule. I create a new PDF file in this folder, write to it and make my assertions. My unit test looks like:



@Test
public void testGeneratePdfIntegration1() throws Exception {

InputStream isMxml = SuperBarcodeProcTest.class.getResourceAsStream(RES_MXML_JOB_1);
InputStream isThumb = SuperBarcodeProcTest.class.getResourceAsStream(RES_PNG_JOB_1);
InputStream isPpf = SuperBarcodeProcTest.class.getResourceAsStream(RES_PPF_JOB_1);

Path destination = tempFolder.newFile("target1.pdf").toPath();

superBarcodeProc = new SuperBarcodeProc(isThumb, isMxml, isPpf, destination.toString());
superBarcodeProc.setDescription("Bogen: 18163407_01_B04_ST_135gl_1000_1-1");
superBarcodeProc.setBarcode("18163407_01");
superBarcodeProc.generatePdf();

assertTrue(Files.exists(destination));
assertTrue(Files.size(destination) > 1024);
}


After the test ends, the temp folder is deleted. The problem is that I have multiple unit tests which generate a PDF file with different settings in the same temp folder, like the test in the code I've provided, and when I run all tests in the class only the first one succeeds. My guess is that after the first test ends the temp folder is gone, and the other tests fail with an IOException saying that the system cannot find the given path. The question is: how can I use the same folder for multiple unit tests without the folder being deleted, or is this impossible so that I have to create a temp folder for each test case?
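
One option, assuming JUnit 4.9 or later, is to promote the rule to a @ClassRule, so the folder is created once before the first test of the class and deleted only after the last one; a sketch:

import java.nio.file.Path;

import org.junit.ClassRule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;

public class PdfGenerationTest {

    // Created before the first test of the class, deleted after the last one.
    @ClassRule
    public static final TemporaryFolder tempFolder = new TemporaryFolder();

    @Test
    public void testGeneratePdfIntegration1() throws Exception {
        Path destination = tempFolder.newFile("target1.pdf").toPath();
        // ... generate the PDF and assert on `destination` as before
    }
}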


jeudi 26 février 2015

Intercept Method/Property call in c#

In the code below I have a class Foo which is called (without an interface) by my main method. There is no backing field or setter for the property; instead, it calls a private method. Foo cannot be changed, nor can the usage of foo be changed to an IFoo interface.


- How do I change the value of foo.FooValue?


- Is there anything in the System.Reflection, System.Reflection.Emit, .NET standard libraries etc (unsafe code, whatever) that I can include in a unit test to change the return value?


I appreciate that if there is something, it's bound to be quite "evil", but I am interested in "evil" answers.



public class Program
{
public static void Main(){

Foo foo = new Foo();
int bar = foo.FooValue;
}
}

public class Foo{

public int FooValue
{
get
{
return this.FooMethod();
}
}

private int FooMethod()
{
return 0;
}
}


Related questions:


How to set value of property where there is no setter - related but unanswered. Maybe the answer is "no", but I'm not convinced by the top answer, which merely points out that you can't achieve this by changing a (non-existent) backing field.


Intercept call to property get method in C# - Interesting. Not sure whether this is my answer and if it is, not sure how it could be used in a unit test.


SQL Server Unit Testing Stored Procedures - generating testdata

I am writing stored procedure unit tests in VS2012 against SQL Server 2008 R2 for a database that contains a large number of stored procedures, tables and foreign keys.


For each stored procedure test I generate a few rows of data in the related tables before performing the test.


I have recognized that this practice will make the tests very sensitive to database changes, especially to the addition of NOT NULL columns or extra keys.


The cascading impact of such changes may result in having to keep a lot of tests in sync. Some tests may have nothing to do with a particular change but share one or more related tables, and will therefore fail during preparation.


Also, a rather inconvenient consequence of this is that it is hard to distinguish between tests that failed on the tested conditions and those that failed on a key violation during preparation.


Thinking on a large scale, the consequences in working hours may be serious.


Anything I found so far on this topic has been way too general.


Now comes the question: does a relevant best practice exist for the question of keeping test data in the dev database vs. generating test data within each test?


Pyglet running multiple windows

I have a problem when running some test code. There is a lot of code, so I will paste only a summary of the problem:



import pyglet

class Test(object):
def setUp(self):
self.window = pyglet.window.Window()

def tearDown(self):
del self.window

def wtf(self):
self.setUp()
self.tearDown()
self.setUp()
pyglet.app.run()

test = Test()
test.wtf()


I would expect the code above to open one window; however, it opens two.


How can I fix this problem?
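
One thing worth checking (an assumption, since the full code isn't shown): pyglet keeps its own registry of open windows for the event loop, so dropping the Python reference with del does not close the native window, while calling close() does:

def tearDown(self):
    # close() detaches the window from pyglet's event loop; del only drops
    # the Python reference and leaves the native window registered.
    self.window.close()
    self.window = None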


Python print name of a unit test function in setup

After reading How to get the function name as string in Python?


I wondered if it is somehow possible to put a logging statement into one of the



def setUp(self):
...

def tearDown(self):
...


and print the name of the current test function.
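
unittest.TestCase already knows the name of the test about to run, so setUp can log it without any frame introspection; a minimal sketch:

import logging
import unittest

class LoggingTestCase(unittest.TestCase):

    def setUp(self):
        # self.id() is the dotted path; self._testMethodName is the bare name.
        logging.info("starting %s", self.id())

    def tearDown(self):
        logging.info("finished %s", self._testMethodName)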


Front end javascript testing using Require and Resharper

So I've been trying to figure out how front-end testing (unit testing) works, but I am getting stuck at some point.


So I have my jasmine test set up as follows:



describe('Blabla', function () {

it('returns true', function () {
var people = require(["people"], function(ppl) {
return ppl;
});
expect(people.getTitle()).toBe('People piolmjage');
});
});


But running this gets me:



TypeError: undefined is not a function



So obviously, people is undefined. So perhaps my callback comes in too late. But if I remove the callback I get the following error:



it('returns true', function () {
var people = require("people");
expect(people.getTitle()).toBe('People piolmjage');
});



Error: Module name "people" has not been loaded yet for context: _. Use require([])



I figure there is something wrong in my setup... Does anyone have any idea how to get this front-end testing to work?


I did manage to get it to work from the console using define combined with PhantomJS and the Durandal test files, but I need this to work outside of the console, and there I cannot use define because the test runner won't find my tests.


That's why I need to use the CommonJS way of getting the required viewmodels.


Unit test case for a JavaScript function which accesses HTML elements from a form

I am new to JavaScript unit testing.


We are converting a legacy PHP application to the Symfony2 framework. We are planning to reuse the legacy JavaScript library. The TDD approach is working for the controllers and services (PHP code).


For the JavaScript functions TDD is not required, as we are successfully able to use the legacy JavaScript functions by making the element IDs in the new Symfony2 forms (Twig templates) match the old code.


For future enhancements and unit testing of the JavaScript code, we need to write unit test cases for these legacy functions as well. We are planning to use QUnit for this.


An example function is



function DetailDrop(fieldname) {
fn = fieldname.id;
var field = fn.substring(0, fn.length - 2);
if (document.getElementById(field + "_C") && document.getElementById(field + "_C").checked == true && document.getElementById(field + "_D")) {
document.getElementById(field + "_D").style.display = 'inline';
}
else if (((document.getElementById(field + "_Y") && document.getElementById(field + "_Y").checked == true) || (document.getElementById(field + "_/") && document.getElementById(field + "_/").checked == true)) && document.getElementById(field + "_D")) {
document.getElementById(field + "_D").style.display = 'none';
document.getElementById(field + "_D").selectedIndex = 0;
}
else if (((document.getElementById(field + "_N") && document.getElementById(field + "_N").checked == true) || (document.getElementById(field + "_P") && document.getElementById(field + "_P").checked == true)) && document.getElementById(field + "_D")) {
document.getElementById(field + "_D").style.display = 'inline';
}


}


Given the problem context above, I am not able to find a way to unit test such JavaScript functions, which access HTML elements from a web form.


In such a scenario should I look for options to mock the complete web form


or am I totally off track here?
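
For what it's worth, QUnit's #qunit-fixture element is usually enough here: you don't have to mock the whole form, only the handful of elements the function touches, and QUnit resets the fixture after every test. A sketch (the element IDs are made up for the example):

QUnit.test('DetailDrop shows the detail dropdown when _C is checked', function (assert) {
  var fixture = document.getElementById('qunit-fixture');
  fixture.innerHTML =
    '<input type="checkbox" id="myField_C" checked>' +
    '<select id="myField_D" style="display:none"></select>';

  // DetailDrop only reads fieldname.id, so a plain object is enough here.
  DetailDrop({ id: 'myField_X' });

  assert.equal(document.getElementById('myField_D').style.display, 'inline');
});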


How to reuse tests in Yii2 Codeception

I am setting up a new project based on Yii2 and Codeception. I am using the advanced app template (backend, common, frontend).


I want most of the frontend ActiveRecords to be "read only", so I made a special trait which blocks the appropriate methods like save, insert, delete, update, ...



trait ReadOnlyActiveRecord {

/**
* @throws \yii\web\MethodNotAllowedHttpException
*/
public function save($runValidation = true, $attributeNames = null)
{
return self::throwNotAllowedException(__FUNCTION__);
}

//...


It simply throws a MethodNotAllowedHttpException.


Now I use this trait in multiple frontend ARs and want to test them using Codeception like the following:



use \Codeception\Specify;

/**
* @expectedException \yii\web\MethodNotAllowedHttpException
*/
public function testSaveMethod()
{
$model = new AppLanguage();

$this->specify('model should be unsaveable', function () use ($model) {

expect('save function is not allowed', $model->save())->false();
});
}

/**
* @expectedException \yii\web\MethodNotAllowedHttpException
*/
public function testInsertMethod()
{
$model = new AppLanguage();

$this->specify('model should be unisertable', function () use ($model) {

expect('insert function is not allowed', $model->save())->false();
});
}

// ...


Now I am figuring out how to use these tests in multiple test Cests so I won't rewrite the code again and again in each Cest.


So I am thinking about something like



/**
* Tests ActiveRecord is read only
*/
public function testReadOnly()
{
$model = new AppLanguage();

$this->processReadOnlyTests($model);
}


So my question is:


Where should I put the test methods, and how do I include and call them in specific Cests?


Any suggestions?


Thank you.


How do I test that a method was called within a grails asynchronous promise?

I'm trying to verify, in a unit test, that a method contained in a promise was called.


I have a CallerService that will call classA.methodA() within a task:



class CallerService {

def classA

def callA() {
Promise p = task {
classA.methodA()
}
p.onComplete {
println "complete"
}
p.onError { Throwable err ->
println "there was an error"
}
}
}


and a unit test that mocks ClassA and tries to verify that methodA was called once.



def "calling A"() {
given:"a mock classA"
def mockClassA = Mock(ClassA)
service.classA = mockClassA

when:"callA is called"
service.callA()

then:"methodA should be called once"
1* mockClassA.methodA()
}


The test fails because the mock was called twice.



| Failure: calling A(promisespike.CallerServiceSpec)
| Too many invocations for:
1* mockClassA.methodA() (2 invocations)
Matching invocations (ordered by last occurrence):
2 * mockClassA.methodA() <-- this triggered the error


Is this the result I should be expecting, or have I set up my test incorrectly?


How to separate business logic from web framework for testing?

For testing web applications, many sources suggest keeping your business logic as free from the web framework as possible to make it possible to test without needing to use the web framework. How can this be done?


If I have a registration page, I need to validate the HTTP POST data (e.g. making sure date of birth is a date and required fields are given), then I'd need to persist the validated data to the database. I'm not sure how I can separate the parts related to the web framework from the business logic, as it's quite a simple piece of functionality. Same applies to login pages, account deletion pages, etc.


I imagine the web framework wouldn't be doing much other than passing the HTTP data to the business logic function, which would then do everything (validation, persistence) that the web framework function was doing before this refactoring. So I don't see what I'm gaining. For example it would become



businessLogicRegister(
request.data['email'],
request.data['birthday'],
request.data['username']
);


Could someone give some examples of what basic functions like registration/login might look like in a Node.js framework like Express or Koa, and a good way to separate the business logic from the web framework? And how would that help to make the code more unit-testable?
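
To make the split concrete, here is a rough sketch in plain Node.js (userRepository is a hypothetical injected persistence object): the service module knows nothing about req/res, so it can be unit tested with plain objects and a fake repository, while the route handler shrinks to a thin translation layer.

// registrationService.js -- no Express types anywhere
function validateRegistration(data) {
  var errors = [];
  if (!data.email) { errors.push('email is required'); }
  if (!data.username) { errors.push('username is required'); }
  if (isNaN(Date.parse(data.birthday))) { errors.push('birthday must be a date'); }
  return errors;
}

function registerUser(data, userRepository) {
  var errors = validateRegistration(data);
  if (errors.length) { return Promise.reject(errors); }
  return userRepository.save({
    email: data.email,
    username: data.username,
    birthday: new Date(data.birthday)
  });
}

module.exports = { validateRegistration: validateRegistration, registerUser: registerUser };

// routes.js -- the Express layer only maps HTTP onto the service and back
// app.post('/register', function (req, res) {
//   registerUser(req.body, userRepository)
//     .then(function (user) { res.status(201).json(user); })
//     .catch(function (errors) { res.status(400).json(errors); });
// });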


How to inject $state correctly for angular ui router unit tests?

I am trying to write unit tests for the following use case:


An HTTP request responds with an unauthorized HTTP status (401) -> change state to login.


The problem is that I don't even get the correct $state parameter injected into my test.



describe('Login: ', function() {
beforeEach(module('myapp'));


function mockTemplate(templateRoute, tmpl) {
$templateCache.put(templateRoute, tmpl || templateRoute);
}

describe('On HTTP 401', function(){
var state;

beforeEach(function(){
mockTemplate.bind('components/splitapp/splitappView.html', '')
mockTemplate.bind('components/splitappMaster/splitappMasterView.html', '')
mockTemplate.bind('components/splitappDetail/splitappDetailView.html', '')
mockTemplate.bind('components/splitapp/offcanvasMenu.html', '')
})


it('Should go to login-state', inject(function($state, $location){
console.log($location.url())
$state.go("splitapp")
console.log($state.current)
expect($state.is('splitapp')).toEqual(true);
}))
})
});


After $state.go, the state should be splitapp, which is defined correctly in the router config. Here's my test runner output:



LOG: ''
LOG: Object{name: '', url: '^', views: null, abstract: true}
Chrome 40.0.2214 (Mac OS X 10.10.2) Login: On HTTP 401 Should go to login-
state FAILED
Expected false to equal true.
Error: Expected false to equal true.


Thanks for your help.


C2338 compile error for a Microsoft Visual Studio unit test

I am receiving the following error when I attempt to compile a unit test in Visual Studio 2013:



Error 1 error C2338: Test writer must define specialization of ToString<Q* q> for your class class std::basic_string<wchar_t,struct std::char_traits<wchar_t>,class std::allocator<wchar_t> > __cdecl Microsoft::VisualStudio::CppUnitTestFramework::ToString<struct HINSTANCE__>(struct HINSTANCE__ *).



You can replicate the error by having a test method such as below:



const std::wstring moduleName = L"kernel32.dll";
const HMODULE expected = GetModuleHandle(moduleName.c_str());
Microsoft::VisualStudio::CppUnitTestFramework::Assert::AreEqual(expected, expected);


Does anyone know how I need to go about writing such a specialization of ToString?
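
From the error text, the framework is asking for a specialization in its own namespace. Something along these lines is the usual shape (hedged: the exact placement and the RETURN_WIDE_STRING helper are as I remember them from CppUnitTest.h, so check against your headers):

#include <string>
#include <Windows.h>
#include "CppUnitTest.h"

namespace Microsoft { namespace VisualStudio { namespace CppUnitTestFramework
{
    // Teaches Assert::AreEqual how to print an HMODULE/HINSTANCE in failure messages.
    template<>
    inline std::wstring ToString<HINSTANCE__>(HINSTANCE__* q)
    {
        RETURN_WIDE_STRING(q);   // helper macro from CppUnitTest.h
    }
}}}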


Error: Could not resolve 'app.history' from state ' '

I am writing a unit test which uses $stateProvider (both the code and its test file are shown below). While executing it, I get the error: "Error: Could not resolve 'app.history' from state ''".



$stateProvider
.state('app', { url: "/app", templateUrl: "pages/app/index.html", controller: function($state) {
$state.go('app.history');
}})
.state('app.history', { url: "/history", templateUrl: "pages/app/modules/History/partials/history.html"})


Unit test code -



describe("Unit tests for config.jst", function() {
var $rootScope, $injector, $state;
beforeEach(module('ui.router'));

beforeEach(inject(function(_$rootScope_, _$state_, _$injector_, $templateCache) {
$rootScope = _$rootScope_;
$injector = _$injector_;
$state = _$state_;

$templateCache.put("pages/app/index.html", "");
$templateCache.put("pages/app/modules/History/partials/history.html", "");
}));

describe("states", function() {
var state = "app.history";
it("verify state configuration", function() {
//var config = $state.get(state);
$state.go(state);
$rootScope.$digest();
//console.log($state);
expect($state.current.name).to.be.equal(state);
});
});
});

Does SonarQube do only static analysis?

I have been assigned a task which consists of maintaining the quality of our project. I would like to start with static analysis of our code and unit test coverage. However, I am not sure whether SonarQube does only static analysis or whether it actually builds the project. I mean, how are the unit tests evaluated?


Unit test ActionFilterAttribute OnActionExecutingAsync with Moq

I'm trying to write a unit test for an ASP.NET Web API project. What I want to do is test an action with its corresponding filter. I set up the controller and the filter Moq objects like this:



var filtermock= new Mock<MyActionFilterAttribute>();
filtermock.SetupGet(attr => attr.UserId).Returns(userName);
[...]
var controllermock = new Mock<MyController>();
var filtermock = new Mock<MyActionFilterAttribute>();


The unit test looks like this:



var controller = controllermock.Object;
var filter = filtermock.Object;
await filter.OnActionExecutingAsync(null, CancellationToken.None);
await controller.MyTestFunction();
await filter.OnActionExecutedAsync(null, CancellationToken.None);


The problem is that the overridden functions OnActionExecutingAsync and OnActionExecutedAsync are not being called when I run/debug the test. I guess the base class implementations of ActionFilterAttribute are called instead? Could anyone give me a hint as to what I am doing wrong here?


Unit testing syntaxes / "paradigms" and their benefits/drawbacks?

I want to develop a unit testing framework for a new programming language and am thinking about how to design its interfaces.


The two most popular schemes seem to be JUnit-like (...Unit) and RSpec-like (e.g. RSpec, Jasmine).


What are the benefits and drawbacks? What other concepts would you recommend, and why?


Do you know of articles discussing the different styles and interfaces in general (as opposed to discussing specific implementations and their shortcomings)?


It is a "script" language, prototypal, single-inheritance with mix-ins, dynamically-typed, optionally parameter-/return-type-hinted, interpreted.


C# - EF 6 - MySQL: Error when calling method from unit test method

I have a solution containing more than one project. Until now everything worked quite well, even calling methods from referenced projects.


Now I have tried to start using unit tests, so I can test different methods without running the whole GUI overhead and so on.


I have a Form1 which creates an instance of an entity model. This model is generated from a MySQL database. Instantiating it from Form1 works without a problem when I start the Forms project. But when I reference everything in my unit test project and just instantiate Form1, EF seems to have a problem. What I get is an instance of my entity model containing the following:




  • base {SELECT Extent1.ID, Extent1.extTicketID, Extent1.erstelltDatum, Extent1.erstelltMitarbeiterID, Extent1.firmenID, Extent1.ueberschrift, Extent1.inhalt, Extent1.typID, Extent1.gesichtetDatum, Extent1.gesichtetMitarbeiterID, Extent1.zugewiesenMitarbeiterID, Extent1.prioritaet, Extent1.statusID, Extent1.erledigtDatum, Extent1.erledigtMitarbeiterID, Extent1.projektNr, Extent1.projekt, Extent1.vonProjekt, Extent1.zeit_offen, Extent1.zeit_gesamt, Extent1.auftraggeberID, Extent1.reparatur, Extent1.linkTypID, Extent1.linkID, Extent1.erinnerung, Extent1.intervall, Extent1.intervallStunden, Extent1.reaktionszeit, Extent1.faellig, Extent1.zugewiesenAbteilungID, Extent1.kundeBenachrichtigen, Extent1.zubehoer, Extent1.niederlassung, Extent1.attention, Extent1.auftragDurchID, Extent1.gruppenID, Extent1.gesperrt, Extent1.pauschal, Extent1.pauschalBetrag, Extent1.eigen, Extent1.ogBetrag, Extent1.ogUserID, Extent1.deadline, Extent1.bezLTID, Extent1.bezLID, Extent1.IPFahrt, Extent1.instpauschID, Extent1.inhaltIntern, Extent1.wiedervorlage, Extent1.schliessen, Extent1.wvlText, Extent1.kostenstelleID, Extent1.phaseID, Extent1.wvlModus, Extent1.projektErstellerID, Extent1.aufwand_min, Extent1.kw, Extent1.laFlag, Extent1.bestellnummer, Extent1.laMaID FROM bug AS Extent1} System.Data.Entity.Infrastructure.DbQuery {System.Data.Entity.DbSet}



I should mention that I copied the connection string from the Forms project's App.config file.


I hope you can give me a hint on where the problem might be, and I hope the information above is sufficient.


Thanks in advance.



Do I need to create an instance of class to be tested with unittest?

Say I have:



class Calculator():
    def divide(self, divident, divisor):
        return divident/divisor


And I want to test its divide method using the Python 3.4 unittest module.


Does my code have to instantiate the class to be able to test it? I.e., is the setUp method needed in the following test class:



class TestCalculator(unittest.TestCase):
    def setUp(self):
        self.calc = src.calculator.Calculator()

    def test_divide_by_zero(self):
        self.assertRaises(ZeroDivisionError, self.calc(0, 1))
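

For what it's worth, a sketch of an alternative that skips setUp entirely (the import path is an assumption based on the code above); note also that assertRaises normally receives the callable and its arguments separately, or is used as a context manager:



import unittest

from src.calculator import Calculator  # assumption: matches the src.calculator used above


class TestCalculator(unittest.TestCase):
    def test_divide_by_zero_with_local_instance(self):
        # No setUp needed: the instance can just as well be created inside the test.
        calc = Calculator()
        with self.assertRaises(ZeroDivisionError):
            calc.divide(1, 0)


if __name__ == "__main__":
    unittest.main()
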

How to test specific methods with PHPUnit

I need help with PHPUnit and some methods. How would you write tests in PHPUnit to reach high code coverage for the following properties and methods?


I'm pretty new to PHPUnit and could use some help. I've only written test cases for more basic code so far. This class generates flash messages for the end user and stores them in the session.


Extremely grateful for some help.



private $sessionKey = 'statusMessage';
private $messageTypes = ['info', 'error', 'success', 'warning']; // Message types.
private $session = null;
private $all = null;

public function __construct() {
    if (isset($_SESSION[$this->sessionKey])) {
        $this->fetch();
    }
}

public function fetch() {
    $this->all = $_SESSION[$this->sessionKey];
}

public function add($type = 'debug', $message) {
    $statusMessage = ['type' => $type, 'message' => $message];

    if (is_null($this->all)) {
        $this->all = array();
    }

    array_push($this->all, $statusMessage);

    $_SESSION[$this->sessionKey] = $this->all;
}

public function clear() {
    $_SESSION[$this->sessionKey] = null;
    $this->all = null;
}

public function html() {
    $html = null;

    if (is_null($this->all)) {
        return $html;
    }

    foreach ($this->all as $message) {
        $type = $message['type'];
        $message = $message['message'];

        $html .= "<div class='message-" . $type . "'>" . $message . "</div>";
    }

    $this->clear();

    return $html;
}
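

Not an authoritative answer, but a rough sketch of what such tests could look like, assuming the code above sits in a class named FlashMessenger (a made-up name) and PHPUnit 4.x is in use. Because the class talks to $_SESSION directly, the simplest approach is to reset that superglobal per test:



class FlashMessengerTest extends PHPUnit_Framework_TestCase
{
    protected function setUp()
    {
        // Start each test from a clean session array so state does not leak between tests.
        $_SESSION = array();
    }

    public function testAddStoresMessageInSession()
    {
        $flash = new FlashMessenger();
        $flash->add('success', 'Saved!');

        $this->assertCount(1, $_SESSION['statusMessage']);
        $this->assertEquals('success', $_SESSION['statusMessage'][0]['type']);
    }

    public function testHtmlRendersAndClearsMessages()
    {
        $flash = new FlashMessenger();
        $flash->add('error', 'Oops');

        $html = $flash->html();

        $this->assertContains("<div class='message-error'>Oops</div>", $html);
        $this->assertNull($flash->html()); // the first call cleared the stored messages
    }
}
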

mercredi 25 février 2015

Prepare static HTML files for jasmine-html-runner with grunt

I use grunt and usemin to manage my single-page application (one HTML file with lots of JavaScript). The app talks to a backend via Ajax, so currently I use WebDriver to test the app and its integration with the backend.


I would also like to run some frontend-only tests while mocking out the backend. I have some background in Jasmine, so my solution would be to run them with the jasmine-html-runner.


Usemin is nice for preparing my application's HTML file for distribution. Is there a grunt task that can do something similar to prepare for running tests with Jasmine? I need something that copies my index.html and source files to a folder called test and then includes the test files in the head block of the index.html in the test folder (like usemin does with the minified files in the dist folder).
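

For the copy half of that, a sketch of a grunt-contrib-copy target (the paths are assumptions about the project layout); the script-injection half could then be done by an HTML templating/processing task run against test/index.html, in the same spirit as the usemin blocks in dist:



// Gruntfile.js excerpt - assumes sources live under src/ and the runner page goes to test/
copy: {
    test: {
        files: [
            { expand: true, src: ['index.html', 'src/**/*.js'], dest: 'test/' }
        ]
    }
}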


Does anyone have other ideas on how I can run Jasmine tests against a single-page application?


Thanks in advance!


How do I organize structure of my testSuite using beforeClass, afterClass, afterSuite in testNG?

I'm confused about using the annotations @BeforeClass, @AfterClass, @BeforeSuite and @AfterSuite in TestNG.


I understand the test structure below:



package mytestNG.learning.it ;

public class sample_not_working {
    @Test
    //take action - click on links, input data etc
    public void main() {
    }
    @BeforeMethod
    //do stuff like setup browser etc
    public void beforeMethod() {
    }

    @AfterMethod
    //close browser
    public void afterMethod() {
    }
}


But what do you do in @BeforeClass, @AfterClass and @Test? What file does that go in? Is it a class that runs other classes?


Next, @AfterSuite, @BeforeSuite and @Test:



public class sample_not_working {
    @Test

    public void main() {
        //WHAT KINDA CODE YOU PUT HERE?
    }
    @BeforeSuite
    //WHAT KINDA CODE YOU PUT HERE?
    public void beforeMethod() {
    }
    @AfterSuite
    public void afterMethod() {
        //WHAT KINDA CODE YOU PUT HERE?
    }
}


My question is about the semantics, the meaning, not actual code. I read the TestNG docs - they did not help.
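

For orientation, here is a sketch of how the hooks are typically divided up; the concrete actions in the comments (browsers, reports) are only examples, not requirements:



import org.testng.annotations.*;

public class LifecycleExample {

    @BeforeSuite
    public void beforeSuite() {
        // Runs once before any test in the whole suite (every class listed in testng.xml):
        // e.g. start a Selenium grid, open a database connection pool, initialise reporting.
    }

    @BeforeClass
    public void beforeClass() {
        // Runs once before the first @Test method of this class:
        // e.g. launch the browser that all tests in this class will share.
    }

    @BeforeMethod
    public void beforeMethod() {
        // Runs before every @Test method: e.g. navigate to the start page, reset test data.
    }

    @Test
    public void someTest() {
        // The actual test: click links, input data, assert on the results.
    }

    @AfterMethod
    public void afterMethod() {
        // Runs after every @Test method: e.g. capture a screenshot, clear cookies.
    }

    @AfterClass
    public void afterClass() {
        // Runs once after the last @Test method of this class: e.g. close the shared browser.
    }

    @AfterSuite
    public void afterSuite() {
        // Runs once after the entire suite: e.g. shut down the grid, write the final report.
    }
}


So @BeforeSuite/@AfterSuite bracket everything in the suite, @BeforeClass/@AfterClass bracket all the @Test methods of one class, and @BeforeMethod/@AfterMethod bracket each individual @Test. They can live in the same class as the tests or in a common base class that the test classes extend.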


Magento extension development - testing multiple config.xml files - values being cached?

I am writing a test framework for a Magento extension I'm building. The extension has a lot of configuration values stored in etc/config.xml under <global><default>. The test framework instantiates an extension model and runs one of its methods. However, first it copies a config.xml file to /etc/config.xml. The idea is that the model is instantiated with a different config.xml every time, to test various configurations. The test framework loops through half a dozen different config.xml files.


The problem - even if I re-bootstrap Magento, the extension model always instantiates with the config.xml data from whatever file was present when the routine was started. I can see that the etc/config.xml file is indeed being changed on every iteration, and that the changes are showing up if I dump Mage::getConfig() on each iteration. It's like the extension is caching its config values on a per-run basis. I'm executing the test file via PHP CLI.


Does anyone have any ideas on this one? I'm stumped. Thanks for reading.


Intercept (mock) http requests on Node.js and browser

I don't know if I'm asking too much, but I want a library that intercepts/mocks HTTP requests in an isomorphic/progressive way (i.e. it works the same on Node.js and in the browser) for unit/behaviour tests. Is there such a thing?


I'm building a client for an API and it must work both on server and browser. Nock is great but only for Node (as it doesn't work with Browserify, I tried).


I could just mock the library I'll use for requests (such as superagent or rest). That, however, would lock me in to some library and would require a major refactor of the tests.


My wish is to avoid duplicating tests and to avoid as many environment checks as possible. And to be agnostic of the implementation, hence my need to mock the requests.


I'm almost considering making one myself (or at least a glue between two libraries).


Grails spock Testing with File class

I have a method that creates a file based on content copied from another file, like below:



private cloneBaseFile(fileName, ddi, ddd) {

    def config = grailsApplication.config.kicksim.fileConstants

    String baseFileContents = new File(config.baseFile).getText('UTF-8')

    def help = handleString("${ddd}.${ddi}")

    baseFileContents = baseFileContents.replaceAll("DDDDDI", help);

    def f1 = new File(fileName)
    f1 << baseFileContents

    return fileName
}


I'd like to know how to unit test it.
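

Not a drop-in answer, but one possible shape for a plain Spock specification. The names FileCloneServiceSpec/FileCloneService, the settable grailsApplication property and the assumption that handleString() needs no further collaborators are all guesses based on the snippet above:



import spock.lang.Specification

class FileCloneServiceSpec extends Specification {

    def "cloneBaseFile writes a copy of the base file to the given path"() {
        given: "a real base file on disk and a minimal config map pointing at it"
        File baseFile = File.createTempFile("base", ".txt")
        baseFile.text = "header DDDDDI footer"

        def service = new FileCloneService()
        service.grailsApplication = [config: [kicksim: [fileConstants: [baseFile: baseFile.absolutePath]]]]

        File target = new File(File.createTempDir(), "clone.txt")

        when:
        def returnedName = service.cloneBaseFile(target.absolutePath, "11", "055")

        then:
        returnedName == target.absolutePath
        target.exists()
        target.text.length() > 0   // stronger assertions depend on what handleString() returns

        cleanup:
        baseFile.delete()
        target.delete()
    }
}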


View Controller TDD

I am trying to add some unit tests to my project to test view controllers. However, I seem to be having problems with seemingly simple things. I have created a sample project which I will refer to: http://ift.tt/1wfUeIB


The sample contains a UINavigationController as the initial view controller. The root view controller of the UINavigationController is FirstViewController. There is a button on FirstViewController that segues to SecondViewController. In SecondViewController there is an empty textfield.


The two tests I am trying to add are:

1) Check button title in FirstViewController is "Next Screen".

2) Check textfield in SecondViewController is empty, "".


I have heard that adding your Swift files to both the main target and the test target is not good practice; rather, it is better to make whatever you want to access in your tests public and import the main target into the tests. So that is what I have done. (I have also set "Defines Module" for the main target to YES, as that is what I have read in a few articles as well.)


In FirstViewControllerTests I have instantiated the first view controller with the following:



var viewController: FirstViewController!

override func setUp() {
let storyboard = UIStoryboard(name: "Main", bundle: NSBundle(forClass: self.dynamicType))
let navigationController = storyboard.instantiateInitialViewController() as UINavigationController
viewController = navigationController.topViewController as FirstViewController
viewController.viewDidLoad()
}


And I have added the test:



func testCheckButtonHasTextNextScreen() {
XCTAssertEqual(viewController.button.currentTitle!, "Next Screen", "Button should say Next Screen")
}


Similarly, for SecondViewControllerTest, I have set it up using:



var secondViewController:SecondViewController!

override func setUp() {
let storyboard = UIStoryboard(name: "Main", bundle: NSBundle(forClass: self.dynamicType))
let navigationController = storyboard.instantiateInitialViewController() as UINavigationController
let firstviewController = navigationController.topViewController as FirstViewController
firstviewController.performSegueWithIdentifier("FirstToSecond", sender: nil)
secondViewController = navigationController.topViewController as SecondViewController
secondViewController.viewDidLoad()
}


And the test:



func testTextFieldIsBlank() {
XCTAssertEqual(secondViewController.textField.text, "", "Nothing in textfield")
}


They both fail and I am not sure why. My suspicion is that the way I am instantiating the view controllers is not correct. Is the best way to instantiate the view controllers to use the storyboard (just as it would when running for real)? Or is it acceptable to instantiate them via:



var viewController = FirstViewController()


What is your experience with TDD and view controllers in Swift?


I am using Swift with Xcode 6.1.1.


Thanks in advance.
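

One detail that often matters in tests like these (a sketch, not a verified fix for this exact project): calling viewDidLoad() by hand does not load the view hierarchy or connect the outlets. Accessing the view property forces the storyboard to load them first, for example in the first test's setUp:



override func setUp() {
    super.setUp()
    let storyboard = UIStoryboard(name: "Main", bundle: NSBundle(forClass: self.dynamicType))
    let navigationController = storyboard.instantiateInitialViewController() as UINavigationController
    viewController = navigationController.topViewController as FirstViewController

    // Touching .view triggers loadView()/viewDidLoad() and connects the IBOutlets,
    // so there is no need to call viewDidLoad() manually.
    let _ = viewController.view
}


The same idea applies to the second test: accessing secondViewController.view before touching textField avoids reading an outlet that has not been connected yet.
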


Don't put test assemblies in output on TFS

Is there any way not to put test assemblies in the output? In other words, I'd like to run after-build tests, but I don't need the test assemblies in the output.


Unit testing Angular bootstrapping code

I have an app that uses manual bootstrapping, with a chunk of code that essentially looks like this:



(function() {
    'use strict';
    angular.element( document ).ready( function () {

        function fetchLabels( sLang, labelPath ) {
            // retrieves json files, builds an object and injects a constant into an angular module
        }

        function rMerge( oDestination, oSource ) {
            // recursive deep merge function
        }

        function bootstrapApplication() {
            angular.element( document ).ready( function () {
                angular.bootstrap( document, [ 'dms.webui' ] );
            });
        }

        fetchLabels( 'en_AU' ).then( bootstrapApplication );

    });
})();


It works great - essentially fetches two json files, combines them and injects the result as a constant, then bootstraps the app.


My question is: how do I unit test these functions? I want to write something to test the fetchLabels() and rMerge() methods, but I'm not sure how to go about it. My first thought was to separate the methods out into a service and use them that way, but I wasn't sure whether I could actually invoke my own service this way before I've even bootstrapped the application.


Otherwise, can anyone suggest a way to separate out these methods into something standalone that I can test more readily?
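

One way to make them testable (a sketch; the module, service and file names are invented) is to move the helpers into a small module of their own. angular-mocks can load such a module in Karma with module()/inject() without ever calling angular.bootstrap(), so the page does not need to be bootstrapped for the test:



// bootstrap-helpers.js - a module the pre-bootstrap code and the tests can both load.
angular.module('dms.webui.bootstrapHelpers', [])
    .factory('bootstrapHelpers', ['$http', function ($http) {
        function rMerge(oDestination, oSource) {
            // recursive deep merge, same logic as before
            return oDestination; // placeholder
        }

        function fetchLabels(sLang, labelPath) {
            // retrieve the json file(s); the caller decides what to do with the result
            return $http.get(labelPath + sLang + '.json').then(function (response) {
                return response.data;
            });
        }

        return { fetchLabels: fetchLabels, rMerge: rMerge };
    }]);

// In a Jasmine/Karma spec, ngMock loads the module without bootstrapping anything:
describe('bootstrapHelpers', function () {
    var bootstrapHelpers, $httpBackend;

    beforeEach(module('dms.webui.bootstrapHelpers'));
    beforeEach(inject(function (_bootstrapHelpers_, _$httpBackend_) {
        bootstrapHelpers = _bootstrapHelpers_;
        $httpBackend = _$httpBackend_;
    }));

    it('returns the fetched labels', function () {
        $httpBackend.expectGET('labels/en_AU.json').respond({ greeting: 'hello' });

        var result;
        bootstrapHelpers.fetchLabels('en_AU', 'labels/').then(function (labels) { result = labels; });
        $httpBackend.flush();

        expect(result.greeting).toBe('hello');
    });
});


The manual-bootstrap block then only has to obtain these functions and call them before angular.bootstrap(), so the logic lives in exactly one testable place.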


Not sure where to start with unit testing angular directive

I am trying to unit test the following directive. However, I'm not sure in this case how, or even what, to test.


The part I am testing is the link function; to my knowledge I can only test the outcome of the $watch being triggered. In the case below, is there anything that I can test?


I originally wanted to make sure the modal instance was created (i.e. that $modal.open was called), but I'm not sure this is possible.


The directive:



angular.module('pb.roles.directives')
    .directive('pbRolesModal', ['$modal', 'pbRoles', function ($modal, pbRoles) {

        return {
            restrict: 'A',
            link: function (scope, element, attrs) {
                scope.$watch(attrs.pbRolesModal, function (entity) {
                    if (entity) {
                        element.click(function () {
                            var modalInstance = $modal.open({
                                templateUrl: '/app/roles/views/_peopleRoles.html',
                                controller: 'RolesController',
                                resolve: {
                                    entityId: function () {
                                        return entity.entityId;
                                    },
                                    enabledRoles: function () {
                                        return entity.enabledRoles || 0;
                                    }
                                }
                            });
                        });
                    }
                });
            }
        };

    }]);


My setup so far:



describe('pbRolesModal', function () {

beforeEach(module(function ($provide) {
$provide.constant('organizationService', function () { });
$provide.service('activeProfile', function () { });
}));

beforeEach(module('pb.roles'));
beforeEach(module('pb.roles.directives'));
beforeEach(module('ui.router'));
beforeEach(module('ui.bootstrap'));

var compile, scope, element, isolate;

var html = '<button data-pb-role-auth data-pb-granted-roles="campaign.personRoles" data-pb-keep="Admin,Manager" data-ng-disabled="!campaign" data-pb-roles-modal="campaign" data-ng-class="{ \'btn-sm\': isHeaderMin }" class="btn btn-default-alt"><span class="fa fa-users"></span></button>';


beforeEach(inject(function ($compile, $rootScope, $q, $injector) {
compile = $compile
scope = $rootScope.$new();

$httpBackend = $injector.get('$httpBackend');
$httpBackend.whenGET('/app/roles/views/_peopleRoles.html').respond(200, '');

element = compile(html)(scope);
//$httpBackend.flush();
scope.$digest();
}));

describe('$watch', function () {

it('', function () {

});

});


});
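

One thing that can be tested is that clicking the element opens the modal with the right resolves. A sketch of such a spec, assuming jQuery is loaded on the page (the directive binds its handler with element.click()) and Jasmine 2.x spy syntax; it would sit inside the outer describe so it can reuse scope and element from the setup above:



describe('click handling', function () {

    var $modal;

    beforeEach(inject(function (_$modal_) {
        $modal = _$modal_;
        spyOn($modal, 'open').and.returnValue({ result: { then: angular.noop } });
    }));

    it('opens the roles modal once the watched entity is set', function () {
        scope.campaign = { entityId: 42, enabledRoles: 3 };
        scope.$digest();                    // lets the $watch fire and bind the click handler

        element.triggerHandler('click');

        expect($modal.open).toHaveBeenCalled();
        var options = $modal.open.calls.mostRecent().args[0];
        expect(options.resolve.entityId()).toBe(42);
        expect(options.resolve.enabledRoles()).toBe(3);
    });

    it('does not open anything while the watched entity is still undefined', function () {
        element.triggerHandler('click');
        expect($modal.open).not.toHaveBeenCalled();
    });
});
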

Watchkit Extension Test Class - Bad Access Issue

I set up a test target for my WatchKit extension by following the steps mentioned in the 'How can I unit test my WatchKit extension?' section of this link.


Then I imported a controller class from the extension into my test class and tried to create an object of it. This throws an EXC_BAD_ACCESS error.


Import statement:

#import "NotificationController.h"


Creating an object:

NotificationController *controller = [[NotificationController alloc] init];


The controller class I imported is a subclass of WKUserNotificationInterfaceController. Could someone tell me what I am doing wrong?


Thanks!


Mocking Java objects in my target where target has no setters available

The situation is that I have to write a unit test for a Java class. That class creates an object of another (third-party) class in its constructor, and no setter is available for injecting the third-party class.


I do not have the third-party class available to me, so I would like to mock it. Is this possible with Mockito or any other framework, or is there another suggestion?


One option I see is to create that third-party class myself (with the same package info), have the called functions return whatever I wish, and put it on the classpath where the mocked class will be found.
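

For illustration, the most common Mockito-friendly route is to let the collaborator be passed in (an extra constructor overload is enough), so the test can hand in a mock. All class and method names below are invented placeholders:



import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;

import org.junit.Test;

public class MyServiceTest {

    @Test
    public void usesTheThirdPartyClientProvidedToIt() {
        // "ThirdPartyClient", "MyService", "fetchValue" and "doWork" are placeholder names.
        ThirdPartyClient client = mock(ThirdPartyClient.class);
        when(client.fetchValue()).thenReturn("stubbed");

        // An overloaded (or package-private) constructor accepts the collaborator,
        // while the existing no-arg constructor keeps building the real thing.
        MyService service = new MyService(client);

        assertEquals("stubbed", service.doWork());
        verify(client).fetchValue();
    }
}


If the constructor really cannot be changed, PowerMock's whenNew() can intercept the construction of the third-party class inside it, at the cost of a heavier runner and setup. Writing your own stand-in class with the same package and class name, as suggested above, also works but tends to be brittle across classpath changes.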


AngularJS testing $resource - flush giving error

I have a factory



angular.module('RepServices', [ 'ngResource' ]).factory('Rep',
    function($resource) {
        return $resource('rep.do', {}, {
            get : {
                method : 'GET',
                params : {
                    action : "fetchRep"
                },
                isArray : false,
                responseType : "text"
            }
        });
    });


and have created a test



describe('RepService test', function () {
var httpBackend;
var repService;
var repResponseXML = '<RepEntity><active>false</active><repCode>C326</repCode></RepEntity>';

beforeEach(module('rifApp'));

beforeEach(inject(
function ($injector) {
httpBackend = $injector.get('$httpBackend');
repService = $injector.get('Rep');// Rep is name of the factory
})
);

describe('fetchRep', function () {
it ('should call the RepServices to fetchRep', function () {
var mockData = repResponseXML;
var url = 'rep.do';

httpBackend.expectGET(url).respond(mockData);

var response =
repService.get({repCode: 'C213', subfirm: '001'},
function(httpResponse) {console.log("httpResponse: " + httpResponse);},
function(httpErrorResponse) {});

console.log(response);
console.log("Promise: ");
console.log(response.$promise);


response.$promise.then(function() {console.log("callback");}, function() {console.log("errback");}, function() {console.log("progressback"); });


httpBackend.flush()
});

});

});


I get an error when it calls flush (see below). If I remove the flush, Karma reports success, but the service does not appear to return what I have called mockData.



Error: [jqLite:nosel] Looking up elements via selectors is not supported by jqLite! See: http://ift.tt/13DDhJi
http://ift.tt/1DqGHmy
at JQLite (C:/Users/D532335/Projects/Affirm/Trunk/RIFWeb/WebContent/javascript/lib/angular.js:2365)
.............

Creating a Fake DOM to test in a JavaScript test suite

The problem




  • TypeError: Cannot set property 'innerHTML' of null



The question


I am using Jest to handle my JavaScript unit tests, and it ships with an embedded jsdom, which should handle the DOM-related parts.


This is a fragment of my test:



jest.dontMock('fs');

var markup = require('fs')
    .readFileSync(__dirname + '/markup/index.html')
    .toString();

describe('my app', function () {
    it('should initialize adding <h1> to the DOM', function () {
        document.documentElement.innerHtml = markup;

        app.start();

        expect(app.DOM).toEqual('<h1>Hello world!</h1>');
    });
});


The implementation of .start() contains:



document.querySelector('h1').innerHTML = 'Hello world';


When running the library in the browser, it works well. But when testing via the CLI, it doesn't.


Diagnostics


Basically, I tried the following:



it('should initialize adding <h1> to the DOM', function () {
    document.documentElement.innerHtml = markup;

    console.log(document.querySelector('h1'));

    // [...]
});


And the console.log() outputs 'null'; it seems like the markup I created isn't being added to the "DOM" that jsdom created.


This is the content of /markup/index.html, which is also the value of the markup variable:



<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Linguisticjs Markup Test</title>
</head>
<body>
<h1>Bonjour le monde !</h1>
<h3>Comment ça va ?</h3>
</body>
</html>


Any ideas?
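

One thing that stands out (an observation, not a verified fix): the test assigns to document.documentElement.innerHtml, but the DOM property is spelled innerHTML, so the assignment just creates an unrelated plain property and the markup is never parsed into the document. A small sketch of the spelling the test probably intends:



// innerHTML (capital HTML) is the real DOM property; innerHtml is silently ignored by the parser.
document.documentElement.innerHTML = markup;

// or, to keep the document structure jsdom already created and only swap the content:
document.body.innerHTML = markup;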