It's unclear which one we need, and in the source code, conditional
compilation treats tweetnacl as a subclass of libsodium, which is
inaccurate.
Solution: redesign the configure/cmake API for this:
* tweetnacl is built in by default; there is no option to enable it
explicitly
* libsodium can be enabled using --with-libsodium, which replaces
the built-in tweetnacl
* CURVE encryption can be disabled entirely using --enable-curve=no
The macros we define in platform.hpp are:
ZMQ_HAVE_CURVE 1 // When CURVE is enabled
HAVE_LIBSODIUM 1 // When we are using libsodium
HAVE_TWEETNACL 1 // When we're using tweetnacl (default)
As of this patch, the default build of libzmq always has CURVE
security, and always uses tweetnacl.
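For illustration, a sketch of how the macros are meant to combine
(the header names here are assumptions, not taken from this patch):

    #if defined (ZMQ_HAVE_CURVE)
    #  if defined (HAVE_LIBSODIUM)
    #    include <sodium.h>
    #  elif defined (HAVE_TWEETNACL)
    #    include "tweetnacl.h"
    #  endif
    #endif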
I'm on a reasonably sized laptop, and I think allocating INT_MAX
memory is dangerous in a test case.
Solution: expose this as a context option. I've called it ZMQ_MAX_MSGSZ,
documented it, and implemented the API. However I don't know how
to get the parent context for a socket, so the code in zmq.cpp is
still unfinished.
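A usage sketch for the new option (the 1 MB cap is an illustrative
value, not a recommendation):

    #include <zmq.h>
    #include <assert.h>

    int main (void)
    {
        void *ctx = zmq_ctx_new ();
        //  Cap message size at 1 MB instead of the INT_MAX default
        int rc = zmq_ctx_set (ctx, ZMQ_MAX_MSGSZ, 1024 * 1024);
        assert (rc == 0);
        assert (zmq_ctx_get (ctx, ZMQ_MAX_MSGSZ) == 1024 * 1024);
        zmq_ctx_term (ctx);
        return 0;
    }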
These options are confusing and redundant. Their names suggest
they apply to the tcp:// transport, yet they are used for all
stream protocols. The methods zmq::set_tcp_receive_buffer and
zmq::set_tcp_send_buffer don't use these values at all; they use
ZMQ_SNDBUF and ZMQ_RCVBUF.
Solution: merge these new options into ZMQ_SNDBUF and ZMQ_RCVBUF.
This means defaulting these two options to 8192, and removing the
new options. We now have ZMQ_SNDBUF and ZMQ_RCVBUF being used both
for TCP socket control, and for input/output buffering.
Note: the default for SNDBUF and RCVBUF is otherwise 4096.
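After the merge, one pair of options controls both layers; a sketch,
assuming an existing context 'ctx' (8192 mirrors the new default):

    void *push = zmq_socket (ctx, ZMQ_PUSH);
    int bufsize = 8192;
    //  Each option now sets both the TCP socket buffer and the
    //  internal I/O buffer used by the stream engines
    zmq_setsockopt (push, ZMQ_SNDBUF, &bufsize, sizeof bufsize);
    zmq_setsockopt (push, ZMQ_RCVBUF, &bufsize, sizeof bufsize);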
This option has a few issues. The name is long and clumsy. The
functionality is not smooth: one must set both this option and
ZMQ_XPUB_VERBOSE at the same time, or things break mysteriously.
Solution: rename to ZMQ_XPUB_VERBOSER and make it an atomic option
that implicitly enables ZMQ_XPUB_VERBOSE.
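A sketch of the intended usage, assuming an existing context 'ctx':

    void *xpub = zmq_socket (ctx, ZMQ_XPUB);
    int on = 1;
    //  One call now passes through both subscribe and unsubscribe
    //  messages; no separate ZMQ_XPUB_VERBOSE call is needed
    zmq_setsockopt (xpub, ZMQ_XPUB_VERBOSER, &on, sizeof on);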
Solution: be more explicit in the code, and in the zmq_recv man
page (which is the least obvious case). Assert if the length is
nonzero and the buffer is nonetheless null.
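A sketch of the two cases, assuming an open 'socket':

    //  Valid: a zero-length receive may pass a null buffer
    int rc = zmq_recv (socket, NULL, 0, 0);
    //  Invalid: a null buffer with nonzero length now asserts
    //  zmq_recv (socket, NULL, 256, 0);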
There's no value in this as the same pattern is repeated in several
places and it's fair to expect people to understand it.
Solution: revert to the old, one-liner style.
Used static_cast<int> around WSA_WAIT_FAILED, as it is implicitly
defined as an unsigned value (0xFFFFFFFF in winbase.h) and causes a
comparison error.
Removed the use of a C++11-style initializer list for
'sockaddr addr { 0 }' and changed it to 'sockaddr addr = { 0 }'.
Solution: when iterating over a map and conditionally deleting
elements, an erased iterator becomes invalid. Call erase using a
postfix increment on the iterator, to avoid using an invalidated
iterator in the next iteration.
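The pattern in outline (the container and predicate are placeholders):

    std::map<int, item_t> items;
    for (std::map<int, item_t>::iterator it = items.begin ();
          it != items.end ();) {
        if (should_erase (it->second))
            //  Postfix increment: 'it' advances before erase ()
            //  invalidates the erased position
            items.erase (it++);
        else
            ++it;
    }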
Solution: parse the value set by the ZMQ_PRE_ALLOCATED_FD sockopt
when creating a new TCP socket and use it if valid.
Add new tests/test_pre_allocated_fd_tcp.cpp unit test.
Solution: parse the value set by the ZMQ_PRE_ALLOCATED_FD sockopt
when creating a new IPC socket and use it if valid.
Add new tests/test_pre_allocated_fd_ipc.cpp unit test.
Solution: add new [set|get]sockopt ZMQ_PRE_ALLOCATED_FD to allow
users to let ZMQ use a pre-allocated file descriptor instead of
allocating a new one. Update [set|get]sockopt documentation and
test accordingly.
The main use case for this feature is a socket-activated systemd
service. For more information about this feature see:
http://0pointer.de/blog/projects/socket-activation.html
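A usage sketch for the socket-activation case (fd 3 is
SD_LISTEN_FDS_START, the first descriptor systemd passes in; the
endpoint is illustrative):

    void *rep = zmq_socket (ctx, ZMQ_REP);
    int fd = 3;  //  SD_LISTEN_FDS_START
    zmq_setsockopt (rep, ZMQ_PRE_ALLOCATED_FD, &fd, sizeof fd);
    //  bind reuses the pre-allocated fd instead of creating a new one
    zmq_bind (rep, "tcp://127.0.0.1:5555");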
Select was improved to support multiple service providers on Windows.
It should also be slightly faster because of optimized iteration
over selected sockets.
This commit addresses the following warnings reported on gcc 5.2.1. In
the future, this will help reduce the "noise" and help catch warnings
that reveal a serious problem.
The warning was originally introduced in the refactoring associated with
zeromq/libzmq@da2bc60 (Removing zmq_pollfd as it is replaced by zmq_poller).
8<---8<---8<---8<---8<---8<---8<---8<---8<---8<---8<---8<---8<---
/path/to/libzmq/src/socket_poller.cpp: In member function ‘int zmq::socket_poller_t::add(zmq::socket_base_t*, void*, short int)’:
/path/to/libzmq/src/socket_poller.cpp:92:51: warning: missing initializer for member ‘zmq::socket_poller_t::item_t::pollfd_index’ [-Wmissing-field-initializers]
item_t item = {socket_, 0, user_data_, events_};
^
/path/to/libzmq/src/socket_poller.cpp: In member function ‘int zmq::socket_poller_t::add_fd(zmq::fd_t, void*, short int)’:
/path/to/libzmq/src/socket_poller.cpp:108:50: warning: missing initializer for member ‘zmq::socket_poller_t::item_t::pollfd_index’ [-Wmissing-field-initializers]
item_t item = {NULL, fd_, user_data_, events_};
^
8<---8<---8<---8<---8<---8<---8<---8<---8<---8<---8<---8<---8<---
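A sketch of the fix: initialize every member explicitly (the -1
sentinel for pollfd_index is an assumption here, not taken from the
patch):

    item_t item = {socket_, 0, user_data_, events_, -1};
    ...
    item_t item = {NULL, fd_, user_data_, events_, -1};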
Avoids leaking sockets due to multiple monitor calls on one socket.
Alternative: raise an error (not sure which errno; EADDRINUSE?) if a
collision is detected, forcing a manual stop.
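Behaviour sketch (the endpoint names are illustrative):

    zmq_socket_monitor (s, "inproc://monitor.a", ZMQ_EVENT_ALL);
    //  A second call no longer leaks the first monitor's socket
    zmq_socket_monitor (s, "inproc://monitor.b", ZMQ_EVENT_ALL);
    //  A NULL endpoint still stops monitoring explicitly
    zmq_socket_monitor (s, NULL, 0);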
When using ZMQ_REQ_RELAXED, if a 'send' was executed after another
'send', the previous code would terminate the 'reply_pipe', if any.
This is incorrect, as terminating the reply pipe also terminates the
send pipe, since they are the same (a pipe associated with a socket
is bidirectional).
Calling terminate on the pipe sets an internal flag called out_active
to false, and the pipe can no longer send messages.
Removing the 'terminate' call solves the problem and is not an issue,
as the incorrect ordering of messages that could result is taken care
of by the ZMQ_REQ_CORRELATE option if needed.
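For reference, the pattern this fixes, assuming an existing context
'ctx' (the payloads are illustrative):

    void *req = zmq_socket (ctx, ZMQ_REQ);
    int on = 1;
    zmq_setsockopt (req, ZMQ_REQ_RELAXED, &on, sizeof on);
    zmq_setsockopt (req, ZMQ_REQ_CORRELATE, &on, sizeof on);
    zmq_send (req, "ping", 4, 0);
    //  A second send without a reply is legal under ZMQ_REQ_RELAXED;
    //  it must not leave the pipe unable to send
    zmq_send (req, "ping", 4, 0);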
These sockets don't handle multipart data, so if callers send it,
frames get dropped and things break silently.
Solution: if the caller tries to use ZMQ_SNDMORE, return -1 and
set errno to EINVAL.
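A sketch of the new behaviour; ZMQ_CLIENT is an assumed example of a
socket type without multipart support:

    void *client = zmq_socket (ctx, ZMQ_CLIENT);
    int rc = zmq_send (client, "part", 4, ZMQ_SNDMORE);
    assert (rc == -1 && errno == EINVAL);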
If we're going to add CLASS-like APIs we should use the proper
syntax; specifically 'destroy' instead of 'close', which is a
hangover from the 'ZeroMQ is like sockets' model we're slowly
moving away from.
Solution: change zmq_timers_close(p) to zmq_timers_destroy(&p)
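Usage sketch of the renamed call:

    void *timers = zmq_timers_new ();
    //  ... add and execute timers ...
    //  destroy takes the address of the pointer and nulls it,
    //  matching the other *_destroy functions
    zmq_timers_destroy (&timers);
    assert (timers == NULL);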
The VMCI transport allows fast communication between the host
and a virtual machine, between virtual machines on the same host,
and within a single virtual machine (like IPC).
It requires VMware to be installed on the host and Guest Additions
to be installed on the guest.
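Endpoint sketch (the port and CID values are illustrative; 2 is
VMCI's well-known host CID):

    //  On the listening side, accept connections from any CID
    zmq_bind (server, "vmci://*:5555");
    //  In the guest, connect to a service on the host
    zmq_connect (client, "vmci://2:5555");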
This reduces the chance of a race between writer deactivation and
activation. The reader sends an activation command to the writer when
the number of messages read is a multiple of the LWM. With high
throughput (millions of messages per second) and a correspondingly
large HWM (e.g. 10M), the difference between the HWM and the LWM
needs to be large enough that the activation command is received
before the pipe becomes full.
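Illustrative arithmetic (the figures are examples, not measurements):

    time_until_full = (HWM - LWM) / send rate
                    = (10M - 5M) / 5M msg/s
                    = 1 s

With a gap of only a few thousand messages the window would close in
well under a millisecond, faster than the cross-thread activation
command can be delivered, and the writer would stall.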
Only assert on errors we know are our fault,
instead of trying to whitelist every possible network-related failure.
This makes ZeroMQ more portable to other platforms
where the possible errors are different.
In particular, the previous code would often die under iOS.
See issue #1608.
This is an old issue with Windows 7. The effect is that we see a latency
ramp on the first 500 messages.
* The ramp is unaffected by message size.
* Sleeping up to 100msec between sends has no effect except to switch
off ZeroMQ batching so making the ramp more visible.
* After 500 messages, latency falls back down to ~10-40 usec.
* Over inproc:// the ramp happens when we use the signaler class.
* Client-server over inproc:// does not show the ramp.
* Client-server over tcp:// shows a similar ramp.
We know that the signaler is using TCP on Windows. We can 'prime' the
connection by doing 500 dummy sends. This potentially causes new sockets
to be delayed on creation, which is not a good solution.
Note that the signaler sends zero-byte messages. This may also be
confusing TCP.
Solution: flood the receive buffer when creating a new FD pair; send a
1M buffer and discard it.
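A sketch of the priming step (only the 1M figure comes from this
message; 'w' and 'r' stand for the connected FD pair, error handling
omitted):

    const int dummy_size = 1024 * 1024;  //  1M
    char *dummy = (char *) malloc (dummy_size);
    //  Push the whole buffer into one end of the new FD pair...
    send (w, dummy, dummy_size, 0);
    //  ...then read it back from the other end and discard it
    int remaining = dummy_size;
    while (remaining > 0)
        remaining -= recv (r, dummy + dummy_size - remaining,
            remaining, 0);
    free (dummy);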
Fixes #1608