Tuesday, June 28, 2016

Unit testing a function that enforces a specific decimal precision level

I'm writing software for counting preferential multi-seat elections. One common requirement is fixed precision: all arithmetic must be performed on values with a specified, fixed precision, and the result must carry that same precision. Fixed precision here means a set number of digits after the decimal point; any digits beyond that are discarded (truncated, not rounded).

So if we assume 5 digits of precision:

    42/139

becomes:

    42.00000/139.00000 = 0.30215
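
To make the rule concrete, here is a minimal sketch of what such a truncation helper might look like. The Precision function and the PRECISION setting are the names used in the tests below; the real implementation isn't shown here, so this particular version is only an assumption:

    // Hypothetical sketch of the Precision helper: truncate (not round)
    // a decimal to PRECISION digits after the decimal point.
    // Assumes value * 10^PRECISION stays within the decimal range.
    public static int PRECISION = 5;

    public static decimal Precision(decimal value)
    {
        // 10^PRECISION, e.g. 100000 for PRECISION = 5
        decimal factor = 1m;
        for (int i = 0; i < PRECISION; i++)
            factor *= 10m;

        // Scale up, drop everything beyond PRECISION digits, scale back down.
        decimal truncated = Math.Truncate(value * factor) / factor;

        // Pad the scale so the result always carries exactly PRECISION
        // decimal places (42 -> 42.00000): adding a zero whose scale is
        // PRECISION widens the scale without changing the value.
        decimal zeroAtPrecision = new decimal(0, 0, 0, false, (byte)PRECISION);
        return truncated + zeroAtPrecision;
    }

With PRECISION = 5 this turns 42m / 139m (0.3021582...) into 0.30215m, the truncated rather than rounded value, matching the example above.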

I'm having trouble writing unit tests for this. So far I've written these two tests, one for a large number and one for a small one.

    [Test]
    public void TestPrecisionBig()
    {
        PRECISION = 5;
        decimal d = Precision(1987.7845263487169386183643876m);
        Assert.That(d == 1987.78452m);
    }

    [Test]
    public void TestPrecisionSmall()
    {
        PRECISION = 5;
        decimal d = Precision(42);
        Assert.That(d == 42.00000m);
    }

But the second assertion just evaluates 42 == 42.00000m, which is true, so the test passes even when no trailing zeros are enforced. Not what I want.

How do I test this? I guess I could compare d.ToString() against the expected string, but would that be a good, "proper" test?
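
For what it's worth, the ToString comparison would look something like the sketch below. It assumes NUnit (for [Test] and Is.EqualTo) and uses CultureInfo.InvariantCulture so the decimal separator is always a dot; the test name is made up for illustration:

    // Sketch of a string-based precision check; requires
    // using NUnit.Framework; and using System.Globalization;
    [Test]
    public void TestPrecisionSmallAsString()
    {
        PRECISION = 5;
        decimal d = Precision(42);

        // decimal.ToString preserves trailing zeros (the scale), which is
        // exactly the information that == throws away, so this fails for
        // "42" and passes only for "42.00000".
        Assert.That(d.ToString(CultureInfo.InvariantCulture), Is.EqualTo("42.00000"));
    }

An alternative that avoids strings entirely is to assert on the decimal's scale, which decimal.GetBits exposes in bits 16-23 of the fourth array element, but the string form is arguably the clearest way to say "exactly five decimal places".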
