Before this commit, xhas_out() was returning true regardless. This
was correct before the ZMQ_ROUTER_MANDATORY flag was introduced.
However, with ZMQ_ROUTER_MANDATORY set, it made ZMQ_POLLOUT events
meaningless, since the socket always reported itself as writable even
when every peer's outgoing pipe was full.
With this commit, _if_ ZMQ_ROUTER_MANDATORY is set, xhas_out() will
return false if ALL peers' outgoing pipes are full.
There is an outstanding high-level design question:
If ZMQ_ROUTER_MANDATORY is set and zmq_poll() waits for ZMQ_POLLOUT
events, zmq_poll() will wake up as soon as any single pipe has room
to send, regardless of which peer it belongs to. If the application
is trying to reach a different, still congested peer, this creates a
busy loop of zmq_poll() wake-ups followed by zmq_send() calls failing
with EAGAIN. There is no way for the application to selectively wait
for ZMQ_POLLOUT for specific peer(s), which seems necessary when
ZMQ_ROUTER_MANDATORY is used.
This discussion will be addressed in a separate issue.
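For illustration only (not part of the original change), a minimal
sketch of the polling pattern affected, assuming a ROUTER socket with
ZMQ_ROUTER_MANDATORY already set:

    //  Illustrative only: ROUTER with ZMQ_ROUTER_MANDATORY, waiting for
    //  ZMQ_POLLOUT before sending to one specific peer.
    #include <zmq.h>
    #include <errno.h>

    void poll_and_send (void *router)
    {
        zmq_pollitem_t items [] = {{router, 0, ZMQ_POLLOUT, 0}};
        zmq_poll (items, 1, -1);    //  wakes up if *any* pipe has room
        //  Sending to a particular, still congested peer can still fail:
        int rc = zmq_send (router, "peer-id", 7, ZMQ_SNDMORE | ZMQ_DONTWAIT);
        if (rc == -1 && errno == EAGAIN) {
            //  This peer's pipe is full; the next zmq_poll() wakes up
            //  immediately again, which is the busy loop described above.
        }
    }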
Signed-off-by: Marc Sune <marc@voltanet.io>
Signed-off-by: Fredi Raspall <fredi@voltanet.io>
Solution: check if the connecting inproc socket has been closed
before trying to send the identity.
Otherwise the pipe will be left in the waiting_for_delimiter state,
causing writes to fail and the connect side to assert when the context
is torn down and the pending inproc connects are resolved.
Add a test case that covers this behaviour.
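A minimal sketch of the failing sequence (illustrative only, not the
committed test case):

    //  Connect an inproc endpoint that has no binder yet, close the
    //  socket, then terminate the context so the pending connect is
    //  resolved during teardown (this used to assert).
    #include <zmq.h>

    int main (void)
    {
        void *ctx = zmq_ctx_new ();
        void *s = zmq_socket (ctx, ZMQ_PAIR);
        zmq_connect (s, "inproc://nonexistent");   //  queued as pending
        zmq_close (s);                             //  closed before resolution
        zmq_ctx_term (ctx);                        //  resolves pending connects
        return 0;
    }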
Solution: allow for '[' character when doing the basic sanity check
on the TCP endpoint.
Also add unit tests for both the IPv4 and IPv6 source;destination
endpoint formats.
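For illustration (assuming the documented source;destination connect
syntax), note that an IPv6 endpoint begins with '[', which the sanity
check previously rejected:

    //  Illustrative source;destination endpoints, IPv4 and IPv6.
    #include <zmq.h>

    void connect_with_source (void *ctx)
    {
        void *s = zmq_socket (ctx, ZMQ_PUSH);
        int ipv6 = 1;
        zmq_setsockopt (s, ZMQ_IPV6, &ipv6, sizeof ipv6);
        //  IPv4: connect to the destination, binding the given source.
        zmq_connect (s, "tcp://127.0.0.1:0;127.0.0.1:5555");
        //  IPv6: the endpoint starts with '['.
        zmq_connect (s, "tcp://[::1]:0;[::1]:5556");
    }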
getifaddrs() can fail transiently with ECONNREFUSED on Linux.
This has been observed with Linux 3.10 when multiple processes
call zmq::tcp_address_t::resolve_nic_name() simultaneously.
Before asserting in this case, make 10 attempts, with exponential
backoff, given by (1 msec * 2^i), where i is the attempt number.
Fixes #2051
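A sketch of the retry loop (illustrative, not the exact committed
code), assuming 10 attempts and a 1 msec base delay:

    //  Retry getifaddrs() with exponential backoff on transient
    //  ECONNREFUSED failures before asserting.
    #include <ifaddrs.h>
    #include <errno.h>
    #include <unistd.h>
    #include <assert.h>

    struct ifaddrs *get_ifaddrs_with_retry (void)
    {
        struct ifaddrs *ifa = NULL;
        int rc = -1;
        for (int i = 0; i < 10; i++) {
            rc = getifaddrs (&ifa);
            if (rc == 0 || errno != ECONNREFUSED)
                break;
            usleep (1000u << i);        //  1 msec * 2^i
        }
        assert (rc == 0);
        return ifa;                     //  caller frees with freeifaddrs()
    }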
Solution: try to resolve the TCP endpoint passed by the user to
zmq_unbind before giving up, if it does not match any bound endpoint
as-is.
This fixes a breakage in the API where a call to
zmq_bind(s, "tcp://127.0.0.1:9999") with IPv6 enabled on s would
result in a subsequent zmq_unbind(s, "tcp://127.0.0.1:9999") failing.
Add more test cases to increase coverage on all combinations of TCP
endpoints.
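A minimal sketch of the previously broken sequence (illustrative
only):

    //  With ZMQ_IPV6 enabled on the socket, unbinding with the original
    //  endpoint string used to fail; after the fix it succeeds.
    #include <zmq.h>
    #include <assert.h>

    void bind_then_unbind (void *ctx)
    {
        void *s = zmq_socket (ctx, ZMQ_REP);
        int ipv6 = 1;
        zmq_setsockopt (s, ZMQ_IPV6, &ipv6, sizeof ipv6);
        zmq_bind (s, "tcp://127.0.0.1:9999");
        int rc = zmq_unbind (s, "tcp://127.0.0.1:9999");
        assert (rc == 0);               //  previously returned -1
        zmq_close (s);
    }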
Solution: in the Windows-specific ifdef in tcp_listener set_address,
check for error and set errno only after the IPv4 fallback has failed
too, to avoid setting errno when the socket creation succeeds through
the fallback.
Solution: if opening an IPv6 TCP socket fails because IPv6 is not
available, try to open an IPv4 socket instead when creating and
connecting a TCP endpoint.
Solution: if opening an IPv6 TCP socket fails because IPv6 is not
available, try to open an IPv4 socket instead when creating and
binding a TCP endpoint.
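A sketch of the fallback pattern (illustrative; not the exact libzmq
internals), which also shows errno being set only once both attempts
have failed:

    //  Try IPv6 first; fall back to IPv4 when the host has no IPv6
    //  support, and only report an error if both attempts fail.
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <errno.h>

    int open_tcp_socket (int *family)
    {
        *family = AF_INET6;
        int fd = socket (AF_INET6, SOCK_STREAM, IPPROTO_TCP);
        if (fd == -1 && errno == EAFNOSUPPORT) {
            *family = AF_INET;
            fd = socket (AF_INET, SOCK_STREAM, IPPROTO_TCP);
        }
        return fd;      //  errno is only meaningful if this is still -1
    }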
While sending very large messages (far beyond what fits in the TCP
buffer, so that it takes multiple sendto system calls to finish),
zmq_close() will close the connection regardless of ZMQ_LINGER.
In case no engine is attached, a pipe->check_read() is needed to look
for the delimiter in the pipe and ultimately trigger the pipe
termination.
However, if there *is* an engine attached, the check_read() looks
ahead, finds the delimiter, and terminates the connection even though
the engine might actually still be in the middle of sending a message.
This happens because, while the io_thread is still busy sending the
data, the pipe can get terminated, and the io_thread ends up being
terminated along with it.
Solution: always initialise the zmq::options_t member arrays, to avoid
reading uninitialised data when CURVE has not yet been configured and
a getsockopt for ZMQ_CURVE_{SERVER|PUBLIC|SECRET}KEY is issued.
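An illustrative sketch of the initialisation meant here (the member
names are assumptions, not the exact libzmq fields):

    //  Zero the CURVE key buffers up front so a getsockopt issued before
    //  CURVE is configured reads defined (all-zero) data.
    #include <string.h>

    struct options_sketch_t
    {
        unsigned char curve_public_key [32];
        unsigned char curve_secret_key [32];
        unsigned char curve_server_key [32];

        options_sketch_t ()
        {
            memset (curve_public_key, 0, sizeof curve_public_key);
            memset (curve_secret_key, 0, sizeof curve_secret_key);
            memset (curve_server_key, 0, sizeof curve_server_key);
        }
    };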
Backport from libzmq.
See issue #1608.
This is an old issue with Windows 7. The effect is that we see a latency
ramp on the first 500 messages.
* The ramp is unaffected by message size.
* Sleeping up to 100msec between sends has no effect except to switch
off ZeroMQ batching so making the ramp more visible.
* After 500 messages, latency falls back down to ~10-40 usec.
* Over inproc:// the ramp happens when we use the signaler class.
* Client-server over inproc:// does not show the ramp.
* Client-server over tcp:// shows a similar ramp.
We know that the signaler is using TCP on Windows. We could 'prime'
the connection by doing 500 dummy sends, but this would potentially
delay the creation of new sockets, which is not a good solution.
Note that the signaler sends zero-byte messages. This may also be
confusing TCP.
Solution: flood the receive buffer when creating a new FD pair; send a
1M buffer and discard it.
Fixes #1608
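An illustrative sketch of the priming step (not the committed
make_fdpair change), assuming a freshly created, connected socket pair
used by the signaler:

    //  Flood and drain the new FD pair once at creation time so later
    //  zero-byte signals do not hit the latency ramp.
    #include <winsock2.h>

    void prime_fd_pair (SOCKET w, SOCKET r)
    {
        char chunk [4096] = {0};
        int sent_total = 0, recv_total = 0;
        while (sent_total < 1024 * 1024) {
            int n = send (w, chunk, (int) sizeof chunk, 0);
            if (n <= 0)
                break;
            sent_total += n;
            //  Drain what was just sent so neither buffer fills up.
            while (recv_total < sent_total) {
                char discard [4096];
                int m = recv (r, discard, (int) sizeof discard, 0);
                if (m <= 0)
                    return;
                recv_total += m;
            }
        }
    }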
There are two TODO comments in curve_client.cpp and curve_server.cpp
that suggest checking the return code of the sodium_init() call.
sodium_init() returns -1 on error, 0 on success, and 1 if it has been
called before and is already initialized:
https://github.com/jedisct1/libsodium/blob/master/src/libsodium/sodium/core.c
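A minimal sketch of the check the TODOs ask for (illustrative):

    //  sodium_init() returns -1 on error, 0 on success and 1 if libsodium
    //  was already initialized; only -1 is fatal.
    #include <sodium.h>
    #include <assert.h>

    void init_sodium_checked (void)
    {
        int rc = sodium_init ();
        assert (rc != -1);
    }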
Problem: asserts if EINVAL received on read/write
This causes assertion failures after network reconnects.
Solution: allow EINVAL as a possible condition after read/write.
Fixes #829
Fixes #1399
Patch provided by Michele Dionisio @mdionisio, thanks :)
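A sketch of the relaxed check (illustrative; the exact list of
tolerated errno values at the real call sites is an assumption):

    //  Allow EINVAL after read()/write() instead of asserting on it.
    #include <sys/types.h>
    #include <errno.h>
    #include <unistd.h>
    #include <assert.h>

    ssize_t read_checked (int fd, void *buf, size_t len)
    {
        ssize_t n = read (fd, buf, len);
        if (n == -1)
            assert (errno == EAGAIN || errno == EWOULDBLOCK
                 || errno == EINTR || errno == ECONNRESET
                 || errno == EINVAL);   //  EINVAL now tolerated
        return n;
    }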