Extra argument 'hint' is used to start the attribute lookup; if the attribute
is not found, the lookup is restarted from the beginning of the attribute list.
This makes it possible to optimize attribute lookups when you need to get many attributes
from the node and can make assumptions about the likely ordering. The code is
correct regardless of the order, but it is faster than using vanilla lookups
if the order matches the calling order.
Fixes #30.
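A minimal sketch of the hinted lookup, assuming the extra argument is exposed as the xml_node::attribute(name, hint) overload; the document shape and attribute names here are illustrative, not taken from the commit:

    #include "pugixml.hpp"

    // attribute names and node layout are made up for illustration
    void read_point(const pugi::xml_node& node)
    {
        // empty hint: the first lookup starts at the beginning of the attribute list
        pugi::xml_attribute hint;

        // if the attributes are stored in this order, the three lookups together
        // make a single pass over the list; any other order still works, just slower
        double x = node.attribute("x", hint).as_double();
        double y = node.attribute("y", hint).as_double();
        double z = node.attribute("z", hint).as_double();

        (void)x; (void)y; (void)z;
    }
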
Address sanitizer can detect underflows, so we don't really need the custom
allocator.
Additionally, the custom allocator can return memory that is not pointer-aligned;
this causes the undefined behavior sanitizer to complain.
Implement compact mode.
This introduces a new storage mode that dramatically reduces node size at some
performance cost.
The mode is enabled by defining PUGIXML_COMPACT. This does not change the
API/ABI - all existing functionality still works.
The pointers are stored using delta encoding in a small number of bytes, with
some additional tricks to make the encoding more optimal for e.g. the parent
pointer and string pointers. Since the node is fixed size, we have to fall back
to a hash table if the pointer does not fit. Thus all DOM operations still have
amortized constant complexity - a constant number of operations if you don't
need the hash table, and amortized constant if you do.
Aside from some performance loss (which is inevitable since decoding takes
time), the only other caveat is that we can't remove entries from the hash
table - so in some edge cases with a lot of node removals the peak memory
consumption can grow indefinitely. In theory we can implement this later; it's
unclear whether this is useful at this point.
The resulting node/attribute sizes are as follows:
non-compact node: 28 bytes (32-bit), 56 bytes (64-bit)
compact node: 12 bytes (32/64-bit)
non-compact attribute: 20 bytes (32-bit), 40 bytes (64-bit)
compact attribute: 8 bytes (32/64-bit)
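Compact mode is purely a compile-time switch; a minimal sketch of using it follows (the define must be consistent across the library build and all code that includes the header, e.g. via pugiconfig.hpp or the compiler command line):

    #define PUGIXML_COMPACT // normally set in pugiconfig.hpp or via -DPUGIXML_COMPACT
    #include "pugixml.hpp"

    int main()
    {
        // regular parsing code - nothing changes at the API level,
        // only the in-memory representation of nodes and attributes
        pugi::xml_document doc;
        pugi::xml_parse_result result = doc.load_string("<root attr='1'><child/></root>");

        return result ? 0 : 1;
    }
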
Now compact_string matches compact_pointer_parent.
Turns out PUGI__UNLIKELY is good at reordering conditions but usually does not
really affect performance. Since MSVC should treat "if" branches as taken and
does not support branch probabilities, don't use them if we don't need to.
Instead of checking if the object being removed allocated a marker, mark the
marker block as deleted immediately upon allocation. This simplifies the logic
and prevents extra markers from being inserted if we allocate/deallocate the
same node indefinitely.
Also change marker pointer type to uint32_t*.
When we deallocate nodes/attributes that allocated the marker we have to
adjust the size accordingly, and dismiss the marker in case it gets
overwritten with something else...
This temporarily increases the node size to 16 bytes - we'll bring it back.
It allows us to remove the horrible node_pi hack and to reduce the amount of
changes against master. This comes at the price of not decreasing the baseline
xml_node_struct size.
The compact xml_node_struct is also increased by this change but a followup
change will reduce *both* xml_attribute_struct and xml_node_struct (to 8/12
bytes).
We used this in two cases - to get the page pointer and to test flags.
We now use PUGI__GETPAGE for getting the page pointer and operator& to test
flags - this makes getting node type significantly faster since it does not
require page pointer reconstruction.
Clarify the offset applied when encoding the pointer difference.
Make diff decoding slightly clearer - no effect on performance.
Adjust branch weighting in compact_string encoding - 0.5% faster.
Use uint16_t in compact_pointer_parent - 2% faster.
Make sure compact_hash_table::rehash() is not inlined - that way reserve() is
inlined so the fast path has no extra function calls.
Also use subtraction instead of multiplication when checking capacity.
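A rough sketch of the shape this describes; the names and the growth policy here are assumptions, not the actual compact_hash_table code. The cold rehash path is kept out of line so the inlined reserve check is all the fast path pays, and the capacity check uses a subtraction rather than a multiplication:

    #include <cstddef>

    // sketch only: member names and constants are assumed, not pugixml's
    struct hash_table_sketch
    {
        size_t count = 0;
        size_t capacity = 0;

        // fast path: intended to be inlined; no calls unless we actually need to grow
        bool reserve()
        {
            // keep roughly a quarter of the slots free; subtraction avoids a multiply
            if (count + 16 <= capacity - capacity / 4)
                return true;

            return rehash(); // cold path, deliberately kept out of line
        }

    #if defined(__GNUC__) || defined(__clang__)
        __attribute__((noinline))
    #endif
        bool rehash()
        {
            // grow and reinsert all entries; details omitted in this sketch
            capacity = capacity ? capacity * 2 : 32;
            return true;
        }
    };
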
xpath_query, xpath_node_set and xpath_variable_set are now moveable.
This is a nice performance optimization for variable/node sets, and enables
storing xpath_query in containers without using pointers (it's only possible
now since the query is not copyable).
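A small illustration of what the move support enables, assuming a C++11 build where pugixml detects move semantics; the query text is made up:

    #include "pugixml.hpp"
    #include <utility>
    #include <vector>

    int main()
    {
        std::vector<pugi::xpath_query> queries;

        // xpath_query is still not copyable, but it can now be moved into the
        // container directly instead of being held through a pointer
        pugi::xpath_query query("//item[@enabled='true']"); // example query
        queries.push_back(std::move(query));

        return 0;
    }
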
xpath_variable_set is essentially an associative container; it's about time it
became copyable.
The implementation is slightly tricky due to out-of-memory handling. Both the
copy ctor and the assignment operator provide the strong exception guarantee
(even if exceptions are disabled! which translates to "roll back on allocation
errors").
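A short example of the new copy support (the variable names and values are made up); since the guarantee is strong, a failed copy leaves the destination untouched:

    #include "pugixml.hpp"

    int main()
    {
        pugi::xpath_variable_set vars;
        vars.set("threshold", 10.0);   // example variables
        vars.set("name", "widget");

        pugi::xpath_variable_set snapshot = vars; // copy construction
        snapshot.set("threshold", 20.0);          // does not affect the original

        return vars.get("threshold")->get_number() == 10.0 ? 0 : 1;
    }
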
If xml_writer::write throws an exception while being called from flush(), the
exception is thrown from the destructor. Clang in C++11 mode calls
std::terminate in this case.
Fix code style and revert redundant parameters/whitespace changes.
Also remove format_each_attribute_on_new_line - we're only introducing one
extra formatting flag. The flag implies format_indent but does not include its
bitmask.
Also add a few more tests.
Fixes #14.
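A hedged example of the flag described above; assuming from the description (it implies format_indent without including its bit) that this is the format_indent_attributes flag, passing it on its own is enough to get indented output with each attribute on its own line:

    #include "pugixml.hpp"
    #include <iostream>

    int main()
    {
        pugi::xml_document doc;
        doc.load_string("<node attr1='1' attr2='2'><child/></node>");

        // assuming the new flag is format_indent_attributes;
        // no explicit format_indent is needed since the flag implies it
        doc.save(std::cout, "  ", pugi::format_indent_attributes);

        return 0;
    }
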
End of an era.
Make can be used for regular development (Linux/OSX), documentation building
and release packaging.
CMake can be used for regular development (Windows); it's also used by some
Linux distributions.
Continuous integration is now performed by Travis CI and AppVeyor.
Ensure that all the necessary cleanup is performed in case the allocation fails
with an exception - files are closed, buffers are reclaimed, etc.
Any test that triggers a simulated out-of-memory condition is run once again
with a throwing allocation function. Unobserved std::bad_alloc exceptions count
as test failures and require the CHECK_ALLOC_FAIL macro.
Fixes #17.
Previously, attributes that were copied along with their node used string
sharing, but standalone attributes that were copied using
xml_node::*_copy(xml_attribute) did not.
If an out-of-memory error happens in load_file, there's a danger of leaking the
FILE object. Since there is a limited supply of FILE objects, we can easily test
that the leak does not happen.
as_utf8_end was used with std::string, where writing an extra zero-terminating
character should *probably* always work (at least if size is positive) but is
not ideal.
The only place that needed to zero-terminate was convert_path_heap.
Previously there was no guarantee that the tests that check out-of-memory
handling behavior were actually correct - e.g. that they correctly simulated
out-of-memory conditions.
Now every simulated out-of-memory condition has to be "guarded" using
CHECK_ALLOC_FAIL. It makes sure that every piece of code that is supposed to
cause an out-of-memory condition does so, and that no other code runs out of
memory unnoticed.
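A sketch of the guard pattern this describes; the flag, the helper and the macro shape here are assumptions, not the actual test harness code:

    #include <cassert>

    // sketch: set by the simulated allocator whenever it reports a failure
    static bool g_alloc_fail_observed = false;

    inline void note_simulated_alloc_failure() { g_alloc_fail_observed = true; }

    // the guard asserts that the wrapped code really did hit the simulated failure;
    // a failure anywhere outside such a guard is treated as a test failure
    #define CHECK_ALLOC_FAIL_SKETCH(code)                                         \
        do                                                                        \
        {                                                                         \
            g_alloc_fail_observed = false;                                        \
            code;                                                                 \
            assert(g_alloc_fail_observed && "expected an out-of-memory failure"); \
        } while (0)
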
When parsing XPath variables, we need to perform a heap allocation; if it
fails, an xpath_exception used to be thrown instead of bad_alloc.
Now we throw an exception of the correct type, so that xpath_exception means
'parsing error'.
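An illustration of the distinction in an exception-enabled build (the query text is made up): after the fix, catching xpath_exception only ever means a parse error, while allocation failures surface as std::bad_alloc.

    #include "pugixml.hpp"
    #include <iostream>
    #include <new>

    int main()
    {
        try
        {
            pugi::xpath_query broken("//node[@attr="); // malformed query (made-up example)
        }
        catch (const pugi::xpath_exception& e)
        {
            std::cout << "parse error: " << e.what() << "\n";
        }
        catch (const std::bad_alloc&)
        {
            std::cout << "out of memory while compiling the query\n";
        }

        return 0;
    }
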
This is mostly done using regex replaces of the original Quickbook markup, plus
a bit of manual fixup for multiple references to a single point from different
lines, which AsciiDoc does not seem to handle.