Instead of trying to detect whether we can safely use random shuffle, simply
reimplement it ourselves.
The quality of the RNG is not essential for these tests.
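A minimal sketch of what such a reimplementation could look like (names and
LCG constants are illustrative, not the actual test code): a Fisher-Yates
shuffle driven by a trivial linear congruential generator.

    #include <stddef.h>

    static unsigned int rand_state = 2463534242u;

    static unsigned int test_rand()
    {
        // trivial LCG; quality is irrelevant, we only need deterministic variety
        rand_state = rand_state * 1664525u + 1013904223u;
        return rand_state >> 16;
    }

    template <typename T> void test_shuffle(T* data, size_t count)
    {
        if (count < 2) return;

        // Fisher-Yates: swap each position with a random earlier-or-equal one
        for (size_t i = count - 1; i > 0; --i)
        {
            size_t j = test_rand() % (i + 1);
            T temp = data[i];
            data[i] = data[j];
            data[j] = temp;
        }
    }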
We now have two tests: one tests behavior when we run out of space while
appending to the node set (in which case the append fails), the other tests
behavior when we run out of space while filtering the node set (in which
case the set still contains redundant data).
This test is very sensitive to the particular implementation of union
aggregation; for now let's disable it.
We need a more robust way to test union allocation failures.
We had a few places in test code and library source that relied on an
implicit float->double cast; while such a cast preserves the value exactly,
gcc/clang provide a warning for it to make sure uses of double are
intentional.
This change also adds the warning to the Makefile to make sure we don't
regress on it.
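A hedged sketch of the kind of fix this implies (assuming the warning in
question is gcc/clang's -Wdouble-promotion; the function below is
hypothetical): make the float->double conversion explicit so the promotion
is clearly intentional.

    double scale(float f)
    {
        // return f * 2.0;                    // implicit float->double promotion, warns
        return static_cast<double>(f) * 2.0;  // explicit cast documents the intent
    }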
Fixes #243.
The Intel compiler sets flush-to-zero flags by default, which causes our
denorm test to produce 0.0. So make sure that denorms work on the FPU before
testing the string output.
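A minimal sketch of such a runtime guard (hypothetical test-side code, not
the actual implementation): produce a denormal at runtime and only proceed
if the FPU did not flush it to zero.

    #include <float.h>

    bool fpu_supports_denorms()
    {
        // force a runtime division; under flush-to-zero (e.g. Intel compiler
        // defaults) the result collapses to 0.0 instead of a denormal
        volatile double tiny = DBL_MIN;
        volatile double denorm = tiny / 2.0;
        return denorm != 0.0;
    }

    // ...only test the string output of denormal values if
    // fpu_supports_denorms() returns true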
Fixes #218.
Add a memory allocation failure test for concat with a very large list,
and make sure we have every single axis covered with and without a
predicate, with and without a previous step.
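A hedged sketch of how such a query might be constructed (helper name and
exact query shape are illustrative; the allocation-failure harness itself is
the existing one in the test suite):

    #include <stddef.h>
    #include <string>

    std::string build_concat_query()
    {
        const char* axes[] = { "ancestor", "ancestor-or-self", "attribute", "child",
            "descendant", "descendant-or-self", "following", "following-sibling",
            "namespace", "parent", "preceding", "preceding-sibling", "self" };

        std::string args;
        for (size_t i = 0; i < sizeof(axes) / sizeof(axes[0]); ++i)
        {
            args += std::string(axes[i]) + "::node(), ";       // bare axis
            args += std::string(axes[i]) + "::node()[1], ";     // with a predicate
            args += std::string("*/") + axes[i] + "::node(), "; // with a previous step
        }

        return "concat(" + args + "'')";
    }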
Currently this test has a very large runtime and relies on the fact that
the first memory allocation error causes the test to terminate. This does
not work with the new behavior of running the query through and reporting
the error at the end, so make the runtime reasonable but still allocate
enough memory to blow past the budget.
gcov -b surfaced many lines with partial coverage, where a branch is only
ever taken or never taken, or one of the expressions in a complex
conditional is always either true or false. This change adds a series of
tests (mostly focusing on XPath) to reduce the number of partially
covered lines.
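For illustration (a made-up example, not from the codebase), this is the
kind of line gcov -b flags as partially covered:

    struct node_t { node_t* first_child; };

    void process(node_t*) {}

    void visit(node_t* node)
    {
        // if the tests never call visit(NULL), the first condition is never
        // false, so this line is reported as partially covered even though
        // it executes on every run
        if (node && node->first_child)
            process(node->first_child);
    }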
Previously there was no guarantee that the tests that check out-of-memory
handling behavior were actually correct - e.g. that they correctly simulate
out-of-memory conditions.
Now every simulated out-of-memory condition has to be "guarded" using
CHECK_ALLOC_FAIL. This makes sure that every piece of code that is supposed
to cause out-of-memory does so, and that no other code runs out of memory
unnoticed.
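A hypothetical sketch of how such a guard could be structured (the real
CHECK_ALLOC_FAIL implementation may differ; the failure counter below is an
assumption): record how many simulated allocation failures have happened,
run the guarded code, and assert that at least one new failure occurred.

    #include <stddef.h>

    // hypothetical global maintained by the test allocator when it simulates OOM
    extern size_t g_alloc_fail_count;

    #define CHECK_ALLOC_FAIL(code)                              \
        do                                                      \
        {                                                       \
            size_t fail_count_before = g_alloc_fail_count;      \
            code;                                               \
            CHECK(g_alloc_fail_count != fail_count_before);     \
        } while (0)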
Some compilers don't handle NaNs properly.
Some compilers don't implement fmod in an IEEE-compatible way.
Some compilers have exception handling codegen bugs (DMC...).
This should completely eliminate the confusion between load and load_file.
Of course, for compatibility reasons we have to preserve the old variant -
it will be deprecated in a future version and subsequently removed.
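Usage illustration, assuming the new explicitly named function is
load_string (with load_file unchanged and the old load(const char*) overload
kept for compatibility):

    #include "pugixml.hpp"

    void load_examples()
    {
        pugi::xml_document doc;

        doc.load_string("<node attr='1'/>"); // parse XML from an in-memory string
        doc.load_file("document.xml");       // parse XML from a file on disk
    }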
Sometimes when evaluating the node set we don't need the entire set and
only need the first element in docorder or any element. In the absence of
iterator support we can still use this information to short-circuit
traversals.
This does not have any effect on straightforward node collection queries,
but frequently improves performance of complex queries with predicates
etc. The XMark benchmark gets 15x faster, with some queries enjoying a 100x
speedup on a 10 MB dataset, due to a significant complexity improvement.
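An example of the kind of query that benefits (assuming the short-circuiting
applies to first-in-document-order lookups such as select_single_node):

    #include "pugixml.hpp"

    pugi::xpath_node find_first_book(const pugi::xml_document& doc)
    {
        // only the first matching node is needed, so evaluation can stop as
        // soon as one is found instead of collecting the entire node set
        return doc.select_single_node("//item[@category='book']");
    }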
git-svn-id: https://pugixml.googlecode.com/svn/trunk@1067 99668b35-9821-0410-8761-19e4c4f06640
When allocating new pages, make sure that the page has at least 1/4 of the
base page size free. This lets us do small allocations after big allocations
(e.g. huge node lists) without doing a heap alloc.
This is important because XPath stack code always reclaims extra pages after
evaluating sub-expressions, so allocating a small chunk of memory and then
rolling the state back is a common case (filtering a node list using a
predicate usually does this).
A better solution involves smarter allocation rollback strategy, but the
implemented solution is simple and practical.
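A rough sketch of the sizing rule described above (names are hypothetical,
not the actual allocator code): when a request does not fit in the default
page, allocate a page that still leaves a quarter of the base page size free
after the request.

    #include <stddef.h>

    size_t xpath_page_size_for(size_t request, size_t base_page_size)
    {
        size_t reserve = base_page_size / 4;

        // large requests get an oversized page with room to spare; small
        // requests keep using the default page size
        return request + reserve > base_page_size ? request + reserve : base_page_size;
    }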
git-svn-id: https://pugixml.googlecode.com/svn/trunk@999 99668b35-9821-0410-8761-19e4c4f06640