The project I'm starting on takes in requests to check another service every X minutes, until a designated end date or until the user sends a "delete job" request. X is continually tuned per query: the other service has a hard limit of, say, 1000 items it will send back per request, so for each specific query we want to converge on a polling interval whose results come as close to 1000 as possible without going over.
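To make the convergence part concrete, here's a rough sketch of the kind of adjustment rule I have in mind (all names here are placeholders of mine, not anything from an existing codebase):

```python
RESULT_CAP = 1000  # hard limit on items the external service returns per request

def adjust_interval(current_interval: float, items_returned: int) -> float:
    """Pick the next polling interval for a query, aiming just under the cap."""
    if items_returned >= RESULT_CAP:
        # We hit the cap, so we likely missed items; poll twice as often.
        return current_interval / 2
    # Otherwise scale the interval up toward the cap, damped so we
    # approach 1000 from below rather than oscillating around it.
    ratio = RESULT_CAP / max(items_returned, 1)
    return current_interval * min(ratio, 2.0) * 0.9
```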
How would practitioners of Test-Driven Development craft the initial functional tests to ensure the server is doing what it says it's doing?
One user story is something like this:
1. User submits job with no defined end date.
2. User receives an ack letting him know the job was received.
3. User receives some initial results.
4. User continues to receive results over the next day, no more than 999 in a batch.
5. User sends delete job message.
6. User receives an ack.
7. No more data is sent to User for this job.
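In outline, I picture the functional test walking that story step by step. A pytest-style sketch, where `client` and `received_batches` are hypothetical fixtures (a thin wrapper over the server's API, and a list the fake "User" endpoint appends incoming batches to):

```python
import time

SETTLE_TIME = 0.5  # seconds to listen for stray data when asserting step 7

def wait_for(condition, timeout=2.0, poll=0.01):
    """Spin until condition() is true or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return
        time.sleep(poll)
    raise AssertionError("condition not met within timeout")

def test_job_lifecycle(client, received_batches):
    job_id = client.submit_job(query="example", end_date=None)  # step 1
    assert job_id is not None                                   # step 2: ack

    wait_for(lambda: len(received_batches) > 0)                 # step 3
    assert all(len(b) <= 999 for b in received_batches)         # step 4

    assert client.delete_job(job_id)                            # steps 5-6

    seen = len(received_batches)
    time.sleep(SETTLE_TIME)
    assert len(received_batches) == seen                        # step 7: silence
```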
I'm notably stuck on how to implement a test for step 4. My thought was that the tests could tweak the config file to start the job's time delta at some small value (say 10 ms), and mock the external server's results so that the server should need to converge to checking every 20 ms or so in order to get all the results. Then I could call it good after a couple of seconds of that and go about deleting the job. Doing it this way means I need a separate server playing the "User", and I need to stand up the main server in a testing state, so that it uses a different config and the functions that call the external server are pointed at mocks instead of making calls to the real service.
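Concretely, I imagine wiring that up something like this, where `myserver.external.fetch_results`, `start_server`, and the config keys are all stand-ins for whatever the real entry points end up being:

```python
import itertools
from unittest import mock

# Scripted replies from the fake external service: the first poll hits the
# 1000-item cap (so the server should shrink its delta); later polls don't.
fake_batches = itertools.chain(
    [[f"item-{i}" for i in range(1000)]],
    itertools.repeat([f"item-{i}" for i in range(400)]),
)

def fake_fetch(*args, **kwargs):
    return next(fake_batches)

test_config = {"initial_delta_ms": 10, "max_delta_ms": 100}

with mock.patch("myserver.external.fetch_results", side_effect=fake_fetch):
    server = start_server(test_config)  # hypothetical test-mode entry point
    # ... run the user-story steps against `server`, then shut it down
```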
Do you have a better way to test this? Do you have a pointer to some open source test suites (ideally in Python) that deal with problems like this?