Avoids leaking sockets when multiple monitor calls are made on one socket.
Alternative: raise an error (not sure which errno; EADDRINUSE?) if a collision is detected; force a manual stop.
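For illustration, a minimal sketch of the pattern this guards against, assuming
the public zmq_socket_monitor() API; the inproc endpoint names are made up:

    #include <zmq.h>
    #include <assert.h>

    int main (void)
    {
        void *ctx = zmq_ctx_new ();
        void *sock = zmq_socket (ctx, ZMQ_REP);

        //  First call: libzmq creates an internal monitor socket bound
        //  to the given inproc endpoint.
        int rc = zmq_socket_monitor (sock, "inproc://monitor-1", ZMQ_EVENT_ALL);
        assert (rc == 0);

        //  Stop monitoring explicitly (NULL endpoint) before starting a
        //  second monitor, so the first one's socket is released rather
        //  than leaked.
        rc = zmq_socket_monitor (sock, NULL, 0);
        assert (rc == 0);
        rc = zmq_socket_monitor (sock, "inproc://monitor-2", ZMQ_EVENT_ALL);
        assert (rc == 0);

        zmq_close (sock);
        zmq_ctx_term (ctx);
        return 0;
    }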
When using ZMQ_REQ_RELAXED, if a 'send' is executed after another 'send', the
previous code would terminate the 'reply_pipe', if any.
This is incorrect, as terminating the reply pipe also terminates the send pipe:
they are the same pipe (a pipe associated with a socket is bidirectional).
Terminating the pipe sets an internal flag called out_active to false,
after which the pipe can no longer send messages.
Removing the 'terminate' call solves the problem. Removing it is safe because
the incorrect message ordering it guarded against is taken care of by the
ZMQ_REQ_CORRELATE option, if needed.
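As a usage sketch (not the patched internals): the two options combine on a
REQ socket as below; the endpoint and payload are made up, the options and
calls are the public API.

    #include <zmq.h>

    int main (void)
    {
        void *ctx = zmq_ctx_new ();
        void *req = zmq_socket (ctx, ZMQ_REQ);

        int on = 1;
        //  ZMQ_REQ_RELAXED lifts the strict send/recv alternation, so a
        //  second send is legal even before any reply has arrived.
        zmq_setsockopt (req, ZMQ_REQ_RELAXED, &on, sizeof on);
        //  ZMQ_REQ_CORRELATE stamps requests with an id, so a late reply
        //  to the first send cannot be mistaken for a reply to the second.
        zmq_setsockopt (req, ZMQ_REQ_CORRELATE, &on, sizeof on);
        int linger = 0;
        zmq_setsockopt (req, ZMQ_LINGER, &linger, sizeof linger);

        zmq_connect (req, "tcp://127.0.0.1:5555");

        //  Two sends in a row: with the old code, the second send
        //  terminated the (bidirectional) pipe, after which it could no
        //  longer transmit at all.
        zmq_send (req, "ping", 4, 0);
        zmq_send (req, "ping", 4, 0);

        zmq_close (req);
        zmq_ctx_term (ctx);
        return 0;
    }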
These sockets don't handle multipart data, so if callers send it,
frames get dropped and things break silently.
Solution: if the caller tries to use ZMQ_SNDMORE, return -1 and
set errno to EINVAL.
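A sketch of the resulting caller-visible behavior, assuming a libzmq built
with the draft API so that ZMQ_CLIENT is available:

    #include <zmq.h>
    #include <assert.h>
    #include <errno.h>

    int main (void)
    {
        void *ctx = zmq_ctx_new ();
        void *client = zmq_socket (ctx, ZMQ_CLIENT);
        zmq_connect (client, "tcp://127.0.0.1:5556");

        //  CLIENT sockets carry single-part messages only, so a
        //  multipart send is rejected up front instead of silently
        //  dropping frames later.
        int rc = zmq_send (client, "part1", 5, ZMQ_SNDMORE);
        assert (rc == -1);
        assert (errno == EINVAL);

        zmq_close (client);
        zmq_ctx_term (ctx);
        return 0;
    }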
If we're going to add CLASS-like APIs we should use the proper
syntax; specifically 'destroy' instead of 'close', which is a
hangover from the 'ZeroMQ is like sockets' model we're slowly
moving away from.
Solution: change zmq_timers_close(p) to zmq_timers_destroy(&p)
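For context, a minimal sketch of the zmq_timers lifecycle with the renamed
destructor; the timers API is a draft interface, and the handler body here is
only illustrative:

    #include <zmq.h>
    #include <assert.h>
    #include <stdio.h>

    static void handler (int timer_id, void *arg)
    {
        (void) arg;
        printf ("timer %d fired\n", timer_id);
    }

    int main (void)
    {
        void *timers = zmq_timers_new ();
        assert (timers);

        //  One timer firing every 100 msec.
        int id = zmq_timers_add (timers, 100, handler, NULL);
        assert (id != -1);

        //  Sleep until the next timer is due, then run expired handlers.
        zmq_poll (NULL, 0, zmq_timers_timeout (timers));
        zmq_timers_execute (timers);

        //  'destroy', not 'close': takes the address of the handle and
        //  NULLs it, matching the CLASS-style convention.
        int rc = zmq_timers_destroy (&timers);
        assert (rc == 0 && timers == NULL);
        return 0;
    }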
The VMCI transport allows fast communication between the host
and a virtual machine, between virtual machines on the same host,
and within a virtual machine (like IPC).
It requires VMware to be installed on the host and VMware Tools
to be installed on the guest.
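A sketch of what the endpoints look like, assuming a libzmq built with VMCI
support; the CID (42) and port (5555) are placeholders:

    #include <zmq.h>

    int main (void)
    {
        void *ctx = zmq_ctx_new ();

        //  Guest side: bind on a VMCI port; '*' stands for this
        //  machine's own context ID (CID).
        void *server = zmq_socket (ctx, ZMQ_REP);
        zmq_bind (server, "vmci://*:5555");

        //  Host side (a separate process in reality): connect using
        //  the guest's CID and the same port.
        void *client = zmq_socket (ctx, ZMQ_REQ);
        zmq_connect (client, "vmci://42:5555");

        zmq_close (client);
        zmq_close (server);
        zmq_ctx_term (ctx);
        return 0;
    }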
This reduces the chance of a race between writer deactivation and activation.
The reader sends an activation command to the writer when the number of
messages read is a multiple of the LWM. In situations with high throughput
(millions of messages per second) and a correspondingly large HWM (e.g. 10M),
the difference between the HWM and the LWM needs to be large enough that the
activation command is received before the pipe becomes full.
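A sketch of the watermark arithmetic; the assumption that the LWM is derived
as roughly half the HWM is illustrative, not a quote of the exact libzmq
formula:

    #include <cstdio>

    //  Assumed scheme: low watermark at about half the high watermark.
    static int compute_lwm (int hwm)
    {
        return (hwm + 1) / 2;
    }

    int main ()
    {
        const int hwm = 10 * 1000 * 1000;   //  e.g. a 10M HWM
        const int lwm = compute_lwm (hwm);

        //  The reader signals the writer each time the number of
        //  messages it has read is a multiple of LWM; the HWM - LWM gap
        //  is the slack that must absorb the command's in-flight delay.
        printf ("hwm=%d lwm=%d slack=%d\n", hwm, lwm, hwm - lwm);
        return 0;
    }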
Only assert on errors we know are our fault,
instead of trying to whitelist every possible network-related failure.
This makes ZeroMQ more portable to other platforms
where the possible errors are different.
In particular, the previous code would often die under iOS.
See issue #1608.
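A hedged sketch of the inverted check: assert only on a short list of errors
that are necessarily our own fault, and report everything else to the caller
as a network condition. The errno set is an assumption, not the exact list
used in libzmq:

    #include <cassert>
    #include <cerrno>
    #include <sys/types.h>
    #include <sys/socket.h>

    //  Returns bytes received, or -1 for any network-level failure the
    //  caller should treat as 'connection dropped'.
    static ssize_t safe_recv (int fd, void *buf, size_t len)
    {
        const ssize_t rc = recv (fd, buf, len, 0);
        if (rc == -1) {
            //  These indicate a bug in our own code, never a flaky
            //  link, so they are the only ones that abort.
            assert (errno != EBADF && errno != EFAULT && errno != ENOTSOCK);
            //  Anything else (ECONNRESET, ETIMEDOUT, platform-specific
            //  values on e.g. iOS) is passed up instead of asserting.
        }
        return rc;
    }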
This is an old issue with Windows 7. The effect is that we see a latency
ramp on the first 500 messages.
* The ramp is unaffected by message size.
* Sleeping up to 100msec between sends has no effect except to switch
  off ZeroMQ batching, which makes the ramp more visible.
* After 500 messages, latency falls back down to ~10-40 usec.
* Over inproc:// the ramp happens when we use the signaler class.
* Client-server over inproc:// does not show the ramp.
* Client-server over tcp:// shows a similar ramp.
We know that the signaler is using TCP on Windows. We can 'prime' the
connection by doing 500 dummy sends. This potentially causes new sockets
to be delayed on creation, which is not a good solution.
Note that the signaler sends zero-byte messages. This may also be
confusing TCP.
Solution: flood the receive buffer when creating a new FD pair; send a
1M buffer and discard it.
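A sketch of the priming idea on a freshly created pair of connected sockets;
an AF_UNIX socketpair stands in here for the TCP loopback pair the Windows
signaler really uses, and the 1M figure follows the Solution above:

    #include <cassert>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <unistd.h>

    //  Push 1M of throwaway data through a new fd pair and discard it,
    //  interleaving send and recv so neither buffer can fill up.
    static void prime_fd_pair (int w, int r)
    {
        const size_t total = 1024 * 1024;
        char chunk[4096] = {0};
        size_t sent = 0, received = 0;
        while (received < total) {
            if (sent < total) {
                const ssize_t ns = send (w, chunk, sizeof chunk, 0);
                if (ns > 0)
                    sent += (size_t) ns;
            }
            const ssize_t nr = recv (r, chunk, sizeof chunk, 0);
            if (nr > 0)
                received += (size_t) nr;    //  data is discarded
        }
    }

    int main ()
    {
        int fds[2];
        const int rc = socketpair (AF_UNIX, SOCK_STREAM, 0, fds);
        assert (rc == 0);
        prime_fd_pair (fds[0], fds[1]);
        close (fds[0]);
        close (fds[1]);
        return 0;
    }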
Fixes #1608
This causes assertion failures after network reconnects.
Solution: allow EINVAL as a possible condition after read/write.
Fixes #829
Fixes #1399
Patch provided by Michele Dionisio @mdionisio, thanks :)
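A sketch of the relaxed check, assuming a read wrapper that asserts on a
whitelist of expected errnos; EINVAL joins the set because some platforms
raise it on a descriptor torn down mid-read:

    #include <cassert>
    #include <cerrno>
    #include <sys/types.h>
    #include <unistd.h>

    //  Returns the read() result; any errno outside the expected set
    //  still aborts, but EINVAL after a reconnect no longer does.
    static ssize_t checked_read (int fd, void *buf, size_t len)
    {
        const ssize_t rc = read (fd, buf, len);
        if (rc == -1)
            assert (errno == EAGAIN || errno == EINTR
                    || errno == ECONNRESET || errno == ETIMEDOUT
                    || errno == EINVAL);
        return rc;
    }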
In real-world usage, there have been reported signaler failures where the
eventfd read() or socket recv() system call in signaler::recv() fails,
despite a prior successful signaler::wait() call.
This patch creates a signaler::recv_failable() method that allows an
unreadable eventfd / socket to return an error without asserting.
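A hedged sketch of the shape of such a method; the real signaler_t reads an
eventfd counter or a socket byte and carries more state than shown:

    #include <cerrno>
    #include <sys/types.h>
    #include <unistd.h>

    class signaler_sketch
    {
      public:
        explicit signaler_sketch (int fd) : _fd (fd) {}

        //  Returns 0 when the signal was consumed; returns -1 with
        //  errno set to EAGAIN when the fd is unexpectedly unreadable,
        //  instead of asserting as recv () does.
        int recv_failable ()
        {
            char dummy;
            const ssize_t n = read (_fd, &dummy, sizeof dummy);
            if (n <= 0) {
                errno = EAGAIN;
                return -1;
            }
            return 0;
        }

      private:
        int _fd;
    };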
These tests connected CLIENT and SERVER to DEALER... this isn't
allowed. I changed them to CLIENT-to-SERVER in both cases. The result
was aborts in client.cpp and server.cpp, which cannot handle
invalid multipart data.
I removed the asserts in each of their xsend methods.
Solution: fix the test cases and remove the (unwanted?) asserts
in client.cpp:xsend and server.cpp:xsend.
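A sketch of the corrected pairing, assuming a draft-API build so that
ZMQ_CLIENT and ZMQ_SERVER exist; the inproc endpoint is arbitrary:

    #include <zmq.h>
    #include <assert.h>
    #include <string.h>

    int main (void)
    {
        void *ctx = zmq_ctx_new ();

        //  CLIENT must talk to SERVER (and vice versa); pairing either
        //  of them with DEALER is not a valid combination.
        void *server = zmq_socket (ctx, ZMQ_SERVER);
        void *client = zmq_socket (ctx, ZMQ_CLIENT);
        int rc = zmq_bind (server, "inproc://client-server");
        assert (rc == 0);
        rc = zmq_connect (client, "inproc://client-server");
        assert (rc == 0);

        //  Single-part traffic only; no ZMQ_SNDMORE on these sockets.
        rc = zmq_send (client, "hello", 5, 0);
        assert (rc == 5);

        char buf[16];
        rc = zmq_recv (server, buf, sizeof buf, 0);
        assert (rc == 5 && memcmp (buf, "hello", 5) == 0);

        zmq_close (client);
        zmq_close (server);
        zmq_ctx_term (ctx);
        return 0;
    }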
Tests were failing because some deque calls were causing undefined
behavior: calling front() or pop_front() on an empty deque. Such
calls are now safeguarded.
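The safeguard pattern in miniature; the deque name and the surrounding loop
are illustrative only:

    #include <deque>

    int main ()
    {
        std::deque<int> pending;

        //  front () and pop_front () on an empty deque are undefined
        //  behavior, so every access is gated on an emptiness check.
        while (!pending.empty ()) {
            const int next = pending.front ();
            pending.pop_front ();
            (void) next;                //  ... process next ...
        }
        return 0;
    }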