This change replaces xpath_node_set's single-element storage with a
single-element array, in the hope that this silences the Coverity false
positive about getting a singleton pointer.
Additionally, it refactors the _assign member to unify the small and large
buffer codepaths, since they are essentially identical.
Fixes #233 (hopefully)
The Intel compiler sets flush-to-zero flags by default, which causes our
denormal test to produce 0.0. So make sure that denormals work on the FPU
before testing the string output.
Fixes #218.
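A minimal sketch of such a runtime guard (illustrative, not the actual test code), assuming a denormal produced at runtime flushes to zero under FTZ/DAZ:

#include <cfloat>

// Returns false when flush-to-zero is in effect, so the denormal string test can be skipped.
static bool denorms_supported()
{
    volatile double tiny = DBL_MIN;
    volatile double denorm = tiny / 2.0; // computed at runtime; flushes to zero under FTZ
    return denorm != 0.0;
}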
* Visual Studio Natvis visualization
* Changed the string format to remove the separate natvis file for wide-character mode
* Display any node type with its name and value if either is available
On some Debian systems it looks like we *can* open the current folder as
a file and read its contents, but parsing the result produces an empty
document. We now handle this case as well.
Fixes #225.
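For illustration, a sketch of the scenario being handled (standard pugixml API; after the fix, loading a directory is expected to report an error rather than yield an empty document):

#include "pugixml.hpp"
#include <cstdio>

int main()
{
    pugi::xml_document doc;
    pugi::xml_parse_result result = doc.load_file("."); // "." is a directory, not a file

    if (!result)
        std::printf("load failed: %s\n", result.description());
}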
This makes sure the contents of tests/data/ folder does not go through
newline conversion, which breaks tests that rely on some files having LF
and some files having CR+LF.
Fixes #222.
It looks like the zipfile module by default uses a permission mask of 0,
which after unpacking on Unix-based systems leaves the files
inaccessible.
We now explicitly set the file mask to rw-r--r-- to match the .tar.gz defaults.
Fixes #217.
- Up to now, the libdir was hardcoded to "lib" inside pugixml.pc and
the install directory of pugixml.pc was "lib/pkgconfig"
- Adds support for lib and lib64 by using the CMAKE_INSTALL_LIBDIR variable
This is implicitly true due to the following section, but that was
written before C++11, so this deserves a special mention in the ranged-for
section as well.
Fixes #210.
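For illustration, the kind of usage the documentation now calls out (standard pugixml API; the XML snippet is just an example):

#include "pugixml.hpp"
#include <iostream>

int main()
{
    pugi::xml_document doc;
    doc.load_string("<root><a/><b/></root>");

    // C++11 ranged-for over a node's children via the children() range
    for (pugi::xml_node child : doc.child("root").children())
        std::cout << child.name() << "\n";
}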
This setup can interfere with existing workflows in two ways:
- If the target application used CMake and configured custom postfixes,
this change would override them
- If the target application did *not* use CMake, it would have to abide by
these conventions even when the configuration being used is unexpected -
for example, the default "preferred" configuration is frequently
RelWithDebInfo, not Release, and RelWithDebInfo now gets a postfix.
Fixes #198.
There's really never a reason to *not* want this installed. If an option
is needed to specify installing in a versioned subdirectory, this option
should be explicitly described rather than hidden in something else.
As an added bonus, this makes the CMake install code slightly *less*
complicated.
gcc-8 produces an "attribute directive ignored" warning for
no_sanitize("unsigned-integer-overflow"). At some point gcc will
introduce integer sanitizer support and we'll have to do this all over
again, but for now just don't emit the attribute.
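A hedged sketch of the kind of guard involved (the macro name follows pugixml's PUGI__ prefix but is illustrative, and the real conditions may differ):

// Only clang reliably understands this sanitizer attribute today; everyone else
// gets an empty definition so no "attribute directive ignored" warning is emitted.
#if defined(__clang__) && defined(__has_attribute)
#	if __has_attribute(no_sanitize)
#		define PUGI__UNSIGNED_OVERFLOW __attribute__((no_sanitize("unsigned-integer-overflow")))
#	else
#		define PUGI__UNSIGNED_OVERFLOW
#	endif
#else
#	define PUGI__UNSIGNED_OVERFLOW
#endif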
Mention ubsan fixes; these probably also fix compact mode on 64-bit
architectures where unaligned pointer reads aren't valid, but it's
probably not very relevant...
Several tests got the buffer size wrong when sizeof(char_t)>1, and one
test didn't meet the carefully tuned allocation criteria under compact
mode due to the hash table usage and had to be changed a bit.
We were using << compact_alignment_log2 instead of * compact_alignment
for symmetry with the encoding, where >> is crucial to keep the code fast
and to round towards negative infinity.
For decoding, the results are the same and any reasonable compiler
converts *4 into <<2, so just use a multiplication - unlike a left shift,
it doesn't trigger UB on negative numbers.
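A minimal sketch of the idea (names and the alignment of 4 are illustrative, not the actual compact-mode code):

#include <cstddef>

// Encoding divides a byte offset by the alignment; an arithmetic right shift
// rounds towards negative infinity, which the encoding relies on.
inline int encode_offset(ptrdiff_t diff)
{
    return static_cast<int>(diff >> 2); // >> compact_alignment_log2
}

// Decoding multiplies back; value * 4 is well-defined for negative values,
// whereas value << 2 would be UB, and the compiler emits a shift anyway.
inline ptrdiff_t decode_offset(int value)
{
    return static_cast<ptrdiff_t>(value) * 4; // * compact_alignment
}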
We were using allocate_memory to allocate struct xml_extra_buffer, which
contains pointers; with compact mode, this allocation can be misaligned
by 4 bytes with 8-byte pointers; fix this by manually realigning the pointer.
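A hedged sketch of manual realignment (not the exact fix), assuming the allocation is padded with enough extra bytes to allow rounding up:

#include <cstddef>
#include <cstdint>

// Round a pointer up to the next multiple of alignment (alignment must be a power of two).
inline void* align_up(void* ptr, std::size_t alignment)
{
    std::uintptr_t value = reinterpret_cast<std::uintptr_t>(ptr);
    std::uintptr_t mask = static_cast<std::uintptr_t>(alignment) - 1;
    return reinterpret_cast<void*>((value + mask) & ~mask);
}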
We were misaligning document data on 64-bit platforms by placing 8-byte
pointers at 4-byte offsets; fix this by reserving a full pointer's worth of
bytes for the page marker.
Define noexcept using _MSC_VER instead of _MSC_FULL_VER (first release
of MSVC 2015 should have it), remove redundant PUGIXML_HAS_NOEXCEPT and
define PUGIXML_NOEXCEPT_IF_NOT_COMPACT in terms of PUGIXML_NOEXCEPT.
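A sketch of the macro layering described above (the exact version checks are illustrative and may not match the real headers):

#if __cplusplus >= 201103 || (defined(_MSC_VER) && _MSC_VER >= 1900)
#	define PUGIXML_NOEXCEPT noexcept
#else
#	define PUGIXML_NOEXCEPT
#endif

// Compact-mode moves may need to allocate hash table slots and can therefore throw,
// so the affected members only get noexcept outside of compact mode.
#ifdef PUGIXML_COMPACT
#	define PUGIXML_NOEXCEPT_IF_NOT_COMPACT
#else
#	define PUGIXML_NOEXCEPT_IF_NOT_COMPACT PUGIXML_NOEXCEPT
#endif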
* Adds a macro definition to be able to use noexcept with supporting compilers
* Adds noexcept specifier to move special members of xpath_node_set, xpath_variable_set and xpath_query, but not of xml_document as it has a throwing implementation
Texas Instruments compiler produces this warning for unused template
member functions:
"pugixml.cpp", line 253: warning #179-D: function
"pugi::impl::<unnamed>::auto_deleter<T>::release [with
T=pugi::impl::<unnamed>::xml_stream_chunk<char>]" was declared but
never referenced
As far as I can tell, this is a compiler issue - these functions should
not be instantiated in the first place; while it's possible to rework
the code to work around this, the changes would be fragile. It seems
best to just disable this warning - we've seen something similar on SNC
(which appears to use the same frontend!..).
Fixes #182.
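A hedged sketch of how such a warning can be suppressed (the exact pragma placement and warning list in the real code may differ):

#ifdef __TI_COMPILER_VERSION__
// warning #179-D: function "..." was declared but never referenced
#	pragma diag_suppress 179
#endif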
It looks like there are several cases where this might happen:
- In some MinGW distributions, the LLONG_MIN/etc defines are guarded
with:
#if !defined(__STRICT_ANSI__) && defined(__GNUC__)
which means that you don't get them in strict ANSI mode. The previous
workaround was specifically targeted towards this.
- In some GCC distributions (notably GCC 6.3.0 in some configurations),
LLONG_MIN/etc. defines are guarded with:
#if (defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L)
But __STDC_VERSION__ isn't defined as C99 even if you use -std=c++14 -
which is probably technically valid, but not useful.
To work around this, redefine the symbols whenever we are building with
GCC, need them, and they aren't defined - doing this is better than not
building. Instead of hard-coding the constants, use the GCC-specific
__LONG_LONG_MAX__ to compute them.
Fixes #181.
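A sketch of the workaround, assuming __LONG_LONG_MAX__ is available (the actual guard conditions may differ):

#if defined(__GNUC__) && !defined(LLONG_MAX) && defined(__LONG_LONG_MAX__)
#	define LLONG_MAX __LONG_LONG_MAX__
#	define LLONG_MIN (-LLONG_MAX - 1LL)
#	define ULLONG_MAX (LLONG_MAX * 2ULL + 1ULL)
#endif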
Apparently at some point OSX behavior when reading /dev/tty switched
from "can't open the file" to "the file can be opened and 0 bytes can be
read from it", which generates the wrong error and doesn't exercise the
code path we care about.
The static_buffer optimization seems to come from the time when the
on-heap buffer was allocated using global memory operations. At this
point the temporary buffer and the temporary string storage both come from
the evaluation stack (which can be partially allocated on the heap...), so
the extra logic isn't relevant for performance.
This change implements move ctor and assign support for xml_document.
All node handles remain valid after the move and point to the new document; the only exception is the document node itself (that remains unmoved).
Move is O(document size) in theory because it needs to relocate immediate document children (there is just one in conformant documents) and all memory pages; in practice the memory pages only need the header adjusted, which is ~0.1% of the actual data size.
Move requires no allocations in general, except when using compact mode where some moves need to grow the hash table which can fail (throw).
Fixes #104.
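A usage sketch (standard pugixml API) of what the new move support enables:

#include "pugixml.hpp"
#include <cassert>
#include <utility>

int main()
{
    pugi::xml_document source;
    source.load_string("<root><child/></root>");

    pugi::xml_node child = source.child("root").child("child");

    // Moving transfers all pages to the target without copying node data.
    pugi::xml_document target = std::move(source);

    // Node handles taken before the move still refer to the same nodes.
    assert(child == target.child("root").child("child"));
}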
In compact mode, we currently cannot support zero-allocation moves,
since some pointer assignments required during the move need to allocate
hash table slots.
This is mostly applicable to xml_document_struct::first_child, since the
pointer to this element is used as a hash table key, but there are some
contrived cases where parents of root's children need a hash slot and
didn't have it before.
These cases can be fixed by changing the compact encoding to be a bit
more move friendly, but for now it's easier to handle the error and
throw/return during move.
When this happens, the source document doesn't change.
This makes sure that the MSVC shared library build actually exports all the
needed symbols and generates an import library.
Somehow, this is actually enough to make pugixml link as a DLL - there's
no need to specify __declspec(dllimport) even though pugixml exports
classes from the DLL.
Fixes #113.
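For context, a hedged sketch of the export side (PUGIXML_API is the real configuration macro; the guard shown here is illustrative, not the actual build logic):

// When building the shared library, define PUGIXML_API so the public classes
// are marked for export; clients then link against the generated import library.
#if defined(_MSC_VER) && defined(PUGIXML_BUILD_SHARED) // illustrative guard
#	define PUGIXML_API __declspec(dllexport)
#endif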
The old fuzzer location is deprecated; this also makes it almost trivial
to fuzz, provided that clang is set up correctly... on Ubuntu 17.10,
a command sequence like this works now:
sudo apt install clang-5.0
sudo apt install libfuzzer-5.0
sudo cp /usr/lib/llvm-5.0/lib/libFuzzer.a /usr/lib/libLLVMFuzzer.a
CXX=clang++-5.0 make fuzz_parse
After a move, some entries in the hash table can have stale keys that no
longer correspond to any live compact object; this makes the table somewhat
larger, but it does not impact correctness.
The reason is that for us to access a key in the hash table, there has to
be a compact_pointer/string object whose state indicates that it is stored
in the hash table and whose address matches the key. For this to happen, we
would have had to put that object into this state, which would mean
overwriting the hash entry with the new, correct value.
When nodes/pages are being removed, we do not clean up keys from the
hash table - it's safe for the same reason, and thus move doesn't
introduce additional contracts here.
We now check that appending a child to a moved document performs no
allocations - this is already the case, but if we neglected to copy the
allocator state this test would fail.
These just verify that move ctor/assignment operator work as expected in
simple cases - there are a number of ways in which the internal
structure can be incorrect...
This change implements the initial version of move construction and
assignment support for documents.
When moving a document to another document, we always make sure the move
target is in a "clean" state (an empty document), and proceed by relocating
all structures in the most efficient way possible.
Complications arise from the fact that the root (document) node is
embedded into xml_document object, so all pointers to it have to change;
this includes parent pointers of all first-level children, as well as
allocator pointers in all memory pages and the previous pointer in the first
on-heap memory page.
Additionally, compact mode makes everything even more complicated
because some of the pointers we need to update are stored in the hash
table (in fact, document first_child pointer is very likely to be there;
some parent pointers in first-level children will be using
compact_shared_parent but some won't be), which requires allocating a new
hash table, and that allocation can fail.
Some details of this process are not fully fleshed out, especially for
compact mode, and this definitely requires many tests.
It has always been the case that pugixml does not perform Unicode
validation or name/tag Unicode character class validation, but it wasn't
very obvious from the documentation.
Fixes #162.
We support Latin-1 and automatically detect it by parsing the encoding
from the document declaration; both of these facts were omitted from the
description of the automatic detection.
Additionally, the description has been rewritten to be more concise and
a bit more abstract - there's no need to specify the algorithm precisely
here.
Fixes #158.
Using LTCG restricts the resulting .lib files to a specific compiler
version, causing version conflicts when the compiler gets updated
without changing the toolset version. VS2017 now has two incompatible
compilers, 15.0 and 15.3, both of which use toolset v141...
These tests simulate various error conditions when reading data from
streams - seeks failing in seekable streams, underflow throwing an
exception causing read to set badbit, etc.
This change also adjusts memory thresholds to cause a reliable out-of-memory
error during construction of the final buffer for non-seekable streams.
It's not clear whether we still need PUGI__MSVC_CRT_VERSION, but it's
more consistent for now to use it for _snprintf_s since this is relying
on a CRT extension, not on a compiler feature.
These functions were deprecated via comments in 1.5 but never got the
deprecated attribute; now is the time!
Using deprecated functions produces a warning; to silence it, this
change moves the relevant tests to a separate translation unit that has
deprecation disabled.
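A hedged sketch of the deprecation macro involved (PUGIXML_DEPRECATED is the macro pugixml exposes; the exact compiler checks here are illustrative):

#if defined(__GNUC__)
#	define PUGIXML_DEPRECATED __attribute__((deprecated))
#elif defined(_MSC_VER)
#	define PUGIXML_DEPRECATED __declspec(deprecated)
#else
#	define PUGIXML_DEPRECATED
#endif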
Unify build paths in all MSBuild VS projects and extract common build
logic into functions.
Note that this changes both the VS2010 and VS2013 projects to have
more predictable output paths and a fixed output file name (pugixml).
We'd like to build pugixml with both static & dynamic CRT and put it
all in one NuGet package.
CoApp sort of allows us to do this via dynamic/static pivots, but it
does not let us customize the names of the pivots and additionally has
some bugs with the project setup. Their project modifications are also
much more complicated - really, at this point we should do this
ourselves.
Create a simple native NuGet package with a Linkage setting that picks the
right library, and package all libraries appropriately. Note that we use
the unified path syntax to make it simple to just get the right .lib
file from the toolset/platform/configuration/linkage combo.
The macro only works correctly when the input argument is an array with
a statically known size - for pointers, or arrays decayed to pointers, it
silently produces the wrong result.
While this is unlikely to surface issues that aren't caught in
tests/code review, use _countof for MSVC to prevent such code from
compiling.
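A hedged sketch of the difference (the macro names here are illustrative):

#include <cstddef>

// sizeof-based count: also compiles for a pointer, silently yielding
// sizeof(T*) / sizeof(T) instead of the element count.
#define COUNTOF_SIZEOF(a) (sizeof(a) / sizeof((a)[0]))

#ifdef _MSC_VER
// MSVC's _countof (from <stdlib.h>) fails to compile when given a pointer,
// so the mistake is caught at build time.
#	include <stdlib.h>
#	define COUNTOF(a) _countof(a)
#else
#	define COUNTOF(a) COUNTOF_SIZEOF(a)
#endif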