Dear Stefan,
Could my observation be correct? Can this situation occur?
The short answer here is yes, if the PacketBuffer configuration is insufficient and/or does not match the configuration of the SocketBuffer.
By default, a UDP SocketBuffer is configured to hold up to 2048 bytes. This can be changed per socket via setsockopt(SO_RCVBUF).
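To illustrate, here is a minimal sketch of changing the receive buffer with standard BSD socket calls and reading back the value the stack actually applied. This uses a plain POSIX environment for the sketch; embOS/IP offers a compatible socket API, but defaults and rounding behavior differ per stack, and the helper name set_udp_rcvbuf is our own invention, not an embOS/IP function.

```c
#include <sys/socket.h>

/* Requests 'bytes' as the socket receive buffer size and returns the
 * value the stack actually applied (stacks may round the request up
 * or down), or -1 on error. */
int set_udp_rcvbuf(int sock, int bytes) {
  if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof(bytes)) < 0)
    return -1;
  socklen_t len = sizeof(bytes);
  if (getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &bytes, &len) < 0)
    return -1;
  return bytes;  /* effective size as reported by the stack */
}
```

Note that the read-back value is informative: on Linux, for example, the kernel doubles the requested value to account for bookkeeping overhead, so do not assume the effective size equals the request.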
The PacketBuffer configuration of the system should be prepared for the situation where the SocketBuffers of ALL sockets
that can be open at the same time are completely filled, each consuming PacketBuffers.
For UDP this is hard to calculate, as each UDP message consumes one PacketBuffer regardless of its size. This means that with
16 PacketBuffers in the system, 16 incoming 1-byte UDP messages are enough to consume all of them.
For TCP things work differently: since TCP is a stream, new incoming data may be merged into existing packet buffers.
With embOS/IP V3.04 we have added checks that prevent most of the deadlock situations that can occur due to small PacketBuffer
configurations. However, for UDP this is still tricky and hard to avoid.
Is there a way to deal with it?
As a work-around, I configured the send call to be non-blocking and skip sending this single message.
This would have been our first suggestion as well. The stack itself is in a "blocking state" as no more data can be received (one packet is kept free on the Rx side to ensure that sending a TCP retransmit or ACK is still possible). In your case I assume the UDP send() call blocks because the only free PacketBuffer is not big enough for the data you want to send.
Can I limit the number of network packets consumed by incoming UDP (multicast) packets, to ensure packets remain available for send calls?
I will check if we can add this for the next version. As explained above, a good PacketBuffer configuration for UDP is hard to estimate.
Thank you for the suggestion.
As soon as I have something ready for testing I will let you know by mail.
Regards,
Oliver