Different OSes behave differently when you fopen/fseek/ftell a folder.
On Linux, some systems return a size of 0, some return an error, and
some return LONG_MAX. LONG_MAX is particularly problematic because it
causes spurious OOMs under AddressSanitizer.
Using fstat directly cleans this up; however, it introduces a new
dependency on platform-specific headers that we didn't have before, and
its behavior on 64-bit systems with respect to 32-bit sizes is unclear
and will need further testing, as I'm not certain whether the behavior
needs to be special-cased only for MSVC/MinGW, which are currently not
handled by this path (unless MinGW defines __unix__...)
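For illustration, a minimal sketch of the fstat approach, assuming POSIX
headers are available; the helper name and exact checks are illustrative,
not the shipped implementation:

    #include <sys/stat.h>
    #include <cstdio>

    // Illustrative: query the size via fstat and reject non-regular files,
    // sidestepping the inconsistent fseek/ftell behavior on folders.
    static bool get_file_size(FILE* file, size_t& out_size)
    {
        struct stat st;
        if (fstat(fileno(file), &st) != 0) return false;

        // Folders and devices should be rejected outright.
        if (!S_ISREG(st.st_mode)) return false;

        // On 32-bit targets st_size can exceed what size_t can represent.
        if (st.st_size < 0) return false;
        if (static_cast<unsigned long long>(st.st_size) > static_cast<size_t>(-1)) return false;

        out_size = static_cast<size_t>(st.st_size);
        return true;
    }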
This is the same fix as #497, but we're using auto_deleter instead
because if the allocation function throws, we can't rely on an explicit
call to deallocate.
Comes with two tests that validate the behavior.
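A rough sketch of the RAII pattern in question (the actual auto_deleter
in the codebase may differ in detail):

    #include <cstdio>

    // The deleter runs from the destructor, so cleanup happens even when
    // a later allocation throws.
    template <typename T> struct auto_deleter
    {
        typedef void (*deleter_t)(T*);

        T* data;
        deleter_t deleter;

        auto_deleter(T* data_, deleter_t deleter_): data(data_), deleter(deleter_) {}
        ~auto_deleter() { if (data) deleter(data); }

        T* release() { T* result = data; data = 0; return result; }
    };

    static void close_file(FILE* file) { fclose(file); }

    // Usage sketch: the FILE is closed automatically if anything that
    // runs after the guard is constructed throws.
    // auto_deleter<FILE> file(fopen(path, "rb"), close_file);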
Previously, when copying the allocator state, we would copy an incorrect
root pointer into the document's current state. While this had minimal
impact on the allocation state (any new allocation would need to create
a new page anyway), setting up new pages used a potentially stale field
of the moved document, which could create issues in future uses of those
pages.
This change fixes the core problem and also removes the use of
_root->allocator from allocate_page, since it's not clear why we need it
there in the first place.
The behavior on Linux varies significantly between kernel versions, and
it triggers an unexpected OOM during sanitizer runs because the size is
somehow reported as LONG_MAX. It's not clear that this test covers any
paths we don't cover otherwise - it would be nice to be able to test
failing to load a multi-gigabyte file on a 32-bit system, but we can't
do that easily at the moment anyway.
On some Debian systems it looks like we *can* open the current folder as
a file and read its contents, but parsing the result produces an empty
document. We now handle this case as well.
Fixes #225.
Apparently at some point the OSX behavior when reading /dev/tty switched
from "can't open the file" to "the file can be opened and 0 bytes can be
read from it", which generates the wrong error and doesn't exercise the
code path we care about.
We now check that appending a child to a moved document performs no
allocations - this is already the case, but if we had neglected to copy
the allocator state, this test would fail.
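A sketch of such a test, using pugixml's public memory hooks and
assuming a build where move support is enabled; the counting allocator
and plain asserts stand in for the real test harness:

    #include "pugixml.hpp"
    #include <cassert>
    #include <cstdlib>
    #include <utility>

    static int allocations = 0;

    static void* counting_allocate(size_t size) { ++allocations; return malloc(size); }
    static void counting_deallocate(void* ptr) { free(ptr); }

    void test_move_then_append_performs_no_allocations()
    {
        pugi::set_memory_management_functions(counting_allocate, counting_deallocate);

        pugi::xml_document source;
        source.append_child("node");

        pugi::xml_document target = std::move(source);

        int before = allocations;
        target.append_child("child");

        // With correctly transferred allocator state the existing page is
        // reused, so no new allocations happen.
        assert(allocations == before);
    }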
These just verify that the move constructor and move assignment operator
work as expected in simple cases - there are a number of ways in which
the internal structure can be incorrect...
These tests simulate various error conditions when reading data from
streams - seeks failing in seekable streams, underflow throwing an
exception that causes read to set badbit, etc.
This change also adjusts memory thresholds to cause a reliable
out-of-memory error during construction of the final buffer for
non-seekable streams.
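A hedged sketch of one such simulated failure - a stream whose underflow
always throws, so that read sets badbit and load should report an I/O
error:

    #include "pugixml.hpp"
    #include <istream>
    #include <streambuf>
    #include <stdexcept>
    #include <cassert>

    // Minimal streambuf whose underflow always throws; istream::read
    // catches the exception and sets badbit.
    struct throwing_streambuf: std::streambuf
    {
        int_type underflow() { throw std::runtime_error("simulated read failure"); }
    };

    void test_stream_read_failure()
    {
        throwing_streambuf buf;
        std::istream in(&buf);

        pugi::xml_document doc;
        pugi::xml_parse_result result = doc.load(in);

        assert(result.status == pugi::status_io_error);
    }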
These functions were deprecated via comments in 1.5 but never got the
deprecated attribute; now is the time!
Using deprecated functions produces a warning; to silence it, this
change moves the relevant tests to a separate translation unit that has
deprecation disabled.
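For reference, a deprecation attribute is typically wired up per
compiler along these lines (a sketch; the exact macro in the header may
differ):

    // Sketch of a compiler-specific deprecation macro.
    #if defined(__GNUC__)
    #   define PUGIXML_DEPRECATED __attribute__((deprecated))
    #elif defined(_MSC_VER)
    #   define PUGIXML_DEPRECATED __declspec(deprecated)
    #else
    #   define PUGIXML_DEPRECATED
    #endif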
New tests try to load a folder and a device as XML documents. Both are
intended to exercise some otherwise-unreachable error paths in the
load_file implementation.
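A sketch of what these tests could look like; the exact device path is
platform-dependent and the asserts stand in for the harness checks:

    #include "pugixml.hpp"
    #include <cassert>

    void test_load_folder_and_device()
    {
        pugi::xml_document doc;

        // Loading a directory must fail cleanly regardless of the
        // OS-specific fopen/fseek/ftell behavior.
        pugi::xml_parse_result folder = doc.load_file(".");
        assert(!folder);

        // Loading a character device exercises the "no data can be read"
        // path (the device to use varies by platform).
        pugi::xml_parse_result device = doc.load_file("/dev/tty");
        assert(!device);
    }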
gcov -b surfaced many lines with partial coverage, where a branch is
only ever taken or only ever skipped, or where one of the expressions in
a complex conditional is always true or always false. This change adds a
series of tests (mostly focusing on XPath) to reduce the number of
partially covered lines.
When using format_raw, the space in an empty tag (<node />) is the only
character that does not have to be there; so format_raw almost produces
minimal XML, but not quite.
It's pretty unlikely that this is crucial for any users - the formatting
change should be benign, and it's better to improve format_raw than to
add yet another flag.
Fixes #87.
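For example:

    #include "pugixml.hpp"
    #include <iostream>

    int main()
    {
        pugi::xml_document doc;
        doc.append_child("node");

        // With this change format_raw emits <node/> instead of <node />.
        doc.save(std::cout, "", pugi::format_raw);
    }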
If an out-of-memory error happens in load_file, there's a danger of
leaking the FILE object. Since there is a limited supply of these
objects, we can easily test that the leak does not happen.
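A sketch of the idea; the file name and the way the harness injects
allocation failures are placeholders:

    #include "pugixml.hpp"
    #include <cstdio>
    #include <cassert>

    void test_load_file_does_not_leak_on_oom()
    {
        // Repeatedly hit the out-of-memory path; if load_file leaked its
        // FILE, this loop would eventually exhaust the descriptor table.
        for (int i = 0; i < 10000; ++i)
        {
            // (the test harness simulates an allocation failure here)
            pugi::xml_document doc;
            pugi::xml_parse_result result = doc.load_file("data/small.xml");
            assert(result.status == pugi::status_out_of_memory);
        }

        // If we can still open a file afterwards, nothing leaked.
        FILE* file = fopen("data/small.xml", "rb");
        assert(file != 0);
        fclose(file);
    }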
Previously there was no guarantee that the tests that check
out-of-memory handling behavior were actually correct - e.g. that they
correctly simulate out-of-memory conditions.
Now every simulated out-of-memory condition has to be "guarded" using
CHECK_ALLOC_FAIL. This makes sure that every piece of code that is
supposed to cause an out-of-memory error does so, and that no other code
runs out of memory unnoticed.
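A sketch of how such a guard could work; the flag name and harness
wiring are assumptions, not the actual test code:

    #include <cassert>

    // Hypothetical flag, set by the test allocator when it simulates a
    // failure.
    extern bool memory_fail_triggered;

    // Run a block that must hit the simulated allocation failure, and
    // verify the failure actually happened (and nothing failed before it).
    #define CHECK_ALLOC_FAIL(code) \
        do { \
            assert(!memory_fail_triggered); \
            code; \
            assert(memory_fail_triggered); \
            memory_fail_triggered = false; \
        } while (0)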
data/truncation.xml was corrupted at some point and was not actually valid.
Fix the file and make the test fail if we can't parse truncation.xml at all.
Unfortunately, standard headers on MinGW32 insist on not declaring the
off64_t and _wfopen extensions if __STRICT_ANSI__ is defined (e.g. in
C++11 mode). This leads to compilation errors since commit b7a1fec
started using _wfopen in strict mode. That change erroneously checked
the GCC version - however, the version itself is irrelevant; the actual
criterion is whether the mingw64 runtime is used.
off64_t is not useful on MinGW32 since we only need it to open large files
on 64-bit platforms; unfortunately, the lack of _wfopen means we won't be
able to support wide-char paths on Windows for MinGW32.
Fixes #24.
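The resulting check targets the runtime rather than the compiler
version; roughly (the macro defined on the last line is illustrative):

    // mingw-w64 keeps off64_t/_wfopen available even under
    // __STRICT_ANSI__; classic MinGW32 hides them, so fall back to the
    // baseline functions there.
    #if defined(__MINGW32__) && defined(__STRICT_ANSI__) && !defined(__MINGW64_VERSION_MAJOR)
    #   define USE_BASELINE_FILE_FUNCTIONS
    #endif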
Since MinGW 4.5 does not define these functions if __STRICT_ANSI__ is
defined (in the case of _wfopen, it defines it inconsistently between
stdio.h and wchar.h), use the baseline functions for MinGW 4.5 and
earlier.
Fixes #23.
This should completely eliminate the confusion between load and load_file.
Of course, for compatibility reasons we have to preserve the old variant -
it will be deprecated in a future version and subsequently removed.
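With the new, explicit name (load_string in the released API), the
intent is clear at the call site:

    #include "pugixml.hpp"

    void example()
    {
        pugi::xml_document doc;

        // Unambiguous: parses the argument as in-memory XML text...
        doc.load_string("<node/>");

        // ...as opposed to treating the argument as a path.
        doc.load_file("document.xml");
    }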