Solution: add -lssp on Solaris only when libsodium is enabled and has
been found. Also disable -pedantic and -Werror, as libsodium headers
use pragma diagnostics, which are not available in gcc 3.4.
Solution: in the Windows-specific ifdef in tcp_listener::set_address,
check for an error and set errno only after the IPv4 fallback has also
failed, to avoid setting errno when socket creation succeeds through
the fallback.
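
For illustration, a simplified C++ sketch of the intended control flow
(not the exact libzmq code; the helper name and the error mapping are
assumptions made for the sketch): the WSA error is translated into errno
only once both the IPv6 attempt and the IPv4 fallback have failed, so a
successful fallback leaves errno untouched.

    #ifdef _WIN32
    #include <winsock2.h>
    #include <errno.h>

    //  Hypothetical helper mirroring the pattern described above.
    static SOCKET open_tcp_socket_with_fallback ()
    {
        SOCKET s = socket (AF_INET6, SOCK_STREAM, IPPROTO_TCP);
        if (s == INVALID_SOCKET)
            s = socket (AF_INET, SOCK_STREAM, IPPROTO_TCP);  //  IPv4 fallback
        if (s == INVALID_SOCKET) {
            //  Only now, after both attempts failed, translate the error;
            //  the mapping here is simplified for the sketch.
            errno = WSAGetLastError () == WSAEAFNOSUPPORT
                        ? EAFNOSUPPORT : ENOBUFS;
        }
        return s;
    }
    #endif
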
Solution: if opening an IPv6 TCP socket fails because IPv6 is not
available, try to open an IPv4 socket instead when creating and
connecting a TCP endpoint.
Solution: if opening an IPv6 TCP socket fails because IPv6 is not
available, try to open an IPv4 socket instead when creating and
binding a TCP endpoint.
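
A portable sketch of the fallback shared by the two entries above (the
connecting and the binding case); the function name is illustrative, not
libzmq's:

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <errno.h>

    //  Try IPv6 first; if the system reports that the address family is
    //  not supported, retry with plain IPv4.
    static int open_tcp_socket ()
    {
        int fd = socket (AF_INET6, SOCK_STREAM, IPPROTO_TCP);
        if (fd == -1 && (errno == EAFNOSUPPORT || errno == EPROTONOSUPPORT))
            fd = socket (AF_INET, SOCK_STREAM, IPPROTO_TCP);
        return fd;    //  -1 if neither address family is available
    }
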
While sending very large messages (far beyond what fits in the TCP
buffer, so it takes multiple sendto system calls to finish),
zmq_close will close the connection regardless of ZMQ_LINGER.
In case no engine is attached, a pipe->check_read() is needed to look
for the delimiter in the pipe and ultimately trigger the pipe
termination.
However, if there *is* an engine attached, the check_read() looks ahead,
finds the delimiter, and terminates the connection even though the
engine might actually still be in the middle of sending a message.
This happens because the pipe can get terminated while the io_thread is
still busy sending the data, and the io_thread then gets terminated
along with it.
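
A minimal repro sketch of this scenario, using only the public API; the
endpoint and message size are illustrative, and a peer is assumed to be
bound at that address. With the default ZMQ_LINGER of -1, closing the
socket and terminating the context should block until the whole message
has been written out, rather than cutting the connection mid-send.

    #include <zmq.h>
    #include <vector>
    #include <cassert>

    int main ()
    {
        void *ctx = zmq_ctx_new ();
        void *push = zmq_socket (ctx, ZMQ_PUSH);
        int rc = zmq_connect (push, "tcp://127.0.0.1:5555");
        assert (rc == 0);

        //  One message far larger than the TCP send buffer, so the I/O
        //  thread needs many writes to flush it.
        std::vector<char> big (64 * 1024 * 1024, 'x');
        rc = zmq_send (push, big.data (), big.size (), 0);
        assert (rc == (int) big.size ());

        //  Default linger is -1 (infinite): closing must not drop the
        //  partially sent message.
        rc = zmq_close (push);
        assert (rc == 0);
        zmq_ctx_term (ctx);    //  blocks until outstanding data is flushed
        return 0;
    }
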
Solution: always initialise the zmq::options_t member arrays to
avoid reading uninitialised data when CURVE is not yet configured and
a getsockopt for ZMQ_CURVE_{SERVER|PUBLIC|SECRET}KEY is issued.
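
For illustration, a sketch of the kind of call that used to read
uninitialised memory (assuming libzmq was built with CURVE support);
with the zeroed member arrays it returns all-zero bytes instead of
garbage:

    #include <zmq.h>
    #include <cstddef>
    #include <cstdio>

    int main ()
    {
        void *ctx = zmq_ctx_new ();
        void *s = zmq_socket (ctx, ZMQ_DEALER);

        //  No CURVE keys have been set on this socket yet.
        unsigned char key[32];    //  binary form of the 256-bit key
        size_t size = sizeof key;
        if (zmq_getsockopt (s, ZMQ_CURVE_PUBLICKEY, key, &size) == 0) {
            for (size_t i = 0; i < size; i++)
                printf ("%02x", key[i]);  //  expect all zeros, not random data
            printf ("\n");
        }

        zmq_close (s);
        zmq_ctx_term (ctx);
        return 0;
    }
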
Backport from libzmq.
See issue #1608.
This is an old issue with Windows 7. The effect is that we see a latency
ramp on the first 500 messages.
* The ramp is unaffected by message size.
* Sleeping up to 100msec between sends has no effect except to switch
off ZeroMQ batching, which makes the ramp more visible.
* After 500 messages, latency falls back down to ~10-40 usec.
* Over inproc:// the ramp happens when we use the signaler class.
* Client-server over inproc:// does not show the ramp.
* Client-server over tcp:// shows a similar ramp.
We know that the signaler is using TCP on Windows. We can 'prime' the
connection by doing 500 dummy sends. This potentially causes new sockets
to be delayed on creation, which is not a good solution.
Note that the signaler sends zero-byte messages. This may also be
confusing TCP.
Solution: flood the receive buffer when creating a new FD pair; send a
1M buffer and discard it.
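
A sketch of that priming step, assuming a freshly connected TCP socket
pair (w = writer end, r = reader end) like the one the signaler builds on
Windows; the helper name and chunk size are illustrative, and the data is
chunked here only to keep the sketch from blocking on a full socket
buffer.

    #include <winsock2.h>
    #include <string.h>

    static void prime_fd_pair (SOCKET w, SOCKET r)
    {
        char chunk[4096];
        memset (chunk, 0, sizeof chunk);

        //  Push roughly 1M through the pair and discard it on the reader
        //  side, so the latency ramp is paid once here rather than on the
        //  first few hundred real signals.
        for (int sent = 0; sent < 1024 * 1024; sent += sizeof chunk) {
            if (send (w, chunk, (int) sizeof chunk, 0) != (int) sizeof chunk)
                break;
            int received = 0;
            while (received < (int) sizeof chunk) {
                int n = recv (r, chunk + received,
                              (int) sizeof chunk - received, 0);
                if (n <= 0)
                    return;
                received += n;
            }
        }
    }
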
Fixes #1608