In the current ProtocolLib release (3.1.0), the getFavicon() method
throws a NullPointerException if the server is not sending a favicon.
This was partially fixed in 3c5482f, but there is still no way to check
whether the server is sending a favicon short of checking whether
CompressedImage.toEncodedText() returns a valid result.
This commit makes the favicon getter and setter in the server ping
handle pings without a favicon correctly: getFavicon() returns null if
no favicon will be displayed, and the favicon can be hidden by setting
it to null.
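For illustration, the new behaviour can be used like the sketch below
(the packet type constant and the getServerPings() accessor are assumed
to match the current ProtocolLib API):

    import com.comphenix.protocol.PacketType;
    import com.comphenix.protocol.ProtocolLibrary;
    import com.comphenix.protocol.events.PacketAdapter;
    import com.comphenix.protocol.events.PacketEvent;
    import com.comphenix.protocol.wrappers.WrappedServerPing;
    import org.bukkit.plugin.Plugin;

    public class FaviconExample {
        public static void register(final Plugin plugin) {
            ProtocolLibrary.getProtocolManager().addPacketListener(
                new PacketAdapter(plugin, PacketType.Status.Server.OUT_SERVER_INFO) {
                    @Override
                    public void onPacketSending(PacketEvent event) {
                        WrappedServerPing ping = event.getPacket().getServerPings().read(0);

                        // With this commit, getFavicon() returns null when the server
                        // is not sending a favicon, instead of throwing an NPE.
                        if (ping.getFavicon() == null) {
                            return;
                        }

                        // Setting the favicon to null hides it from the status response.
                        ping.setFavicon(null);
                    }
                });
        }
    }
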
Calling remove() from the main thread blocks it, which may lead to a
deadlock:
http://pastebin.com/L3SBVKzp
ProtocolLib executes this close() method through a PlayerQuitEvent on
the main thread, which has implicitly acquired a lock on
SimplePluginManager (see SimplePluginManager.callEvent(Event)).
Unfortunately, the remove() method will schedule the removal on one of
the Netty worker threads if it's called from a different thread,
blocking until the removal has been confirmed.
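Roughly, the blocking behaviour corresponds to the sketch below
(simplified; the handler name is illustrative, not the actual
ProtocolLib code):

    private void remove(final Channel channel) {
        if (channel.eventLoop().inEventLoop()) {
            // Already on the channel's worker thread - remove directly.
            channel.pipeline().remove("protocol_lib_inbound");
        } else {
            // Called from another thread (e.g. the main thread): schedule
            // the removal on the worker thread and block until it is done.
            channel.eventLoop().submit(new Runnable() {
                @Override
                public void run() {
                    channel.pipeline().remove("protocol_lib_inbound");
                }
            }).syncUninterruptibly();
        }
    }
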
This is bad enough (Rule #1: Don't block the main thread), but the real
trouble starts if the same worker thread happens to be handling a server
ping connection when this removal task is scheduled. In that case, it
may attempt to invoke an asynchronous ServerPingEvent
(see PacketStatusListener) through SimplePluginManager.callEvent(). But
since the main thread already holds the SimplePluginManager lock, we end
up with a deadlock: the main thread is waiting for the worker thread to process
the task, and the worker thread is waiting for the main thread to
finish executing PlayerQuitEvent.
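A sketch of a non-blocking alternative (assuming it is acceptable for
the handler to be removed slightly later) is to never wait for the
worker thread:

    private void removeLater(final Channel channel) {
        EventLoop loop = channel.eventLoop();

        if (loop.inEventLoop()) {
            channel.pipeline().remove("protocol_lib_inbound");
        } else {
            // Schedule the removal and return immediately - the main thread
            // never waits on a Netty worker thread, so this lock ordering
            // cannot deadlock.
            loop.execute(new Runnable() {
                @Override
                public void run() {
                    channel.pipeline().remove("protocol_lib_inbound");
                }
            });
        }
    }
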
TLDR: Concurrency is hard.
This allows us to provide additional information to our PacketEvent
(mostly OfflinePlayer information) without sacrificing sendMessage() or
getAddress(), which do not work on a CraftPlayer during login.
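As an illustration (hypothetical listener, not part of this commit;
manager is a ProtocolManager and plugin the owning plugin), a
login-phase listener can now rely on both:

    manager.addPacketListener(new PacketAdapter(plugin, PacketType.Login.Client.START) {
        @Override
        public void onPacketReceiving(PacketEvent event) {
            // During login this is a temporary player wrapper rather than a
            // CraftPlayer, but sendMessage() and getAddress() still work.
            Player player = event.getPlayer();

            plugin.getLogger().info("Login attempt from " + player.getAddress());
            player.sendMessage("Checking your profile, hang tight...");
        }
    });
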
The vanilla server bootstrap (ServerConnectionChannel) is executed
asynchronously when a new channel object has been registered in an event
loop, well before it is ready to accept a new client connection. This is
where the whole channel handler pipeline is set up, along with a
NetworkManager responsible for reading and writing packets.
The trouble starts when the bootstrap class adds the created
NetworkManager to a list (f) of managers in ServerConnection. This list
is regularly inspected by the main thread (in order to process packets
on the main thread) and includes a clean-up procedure
(ServerConnection#61) in case it detects a disconnected network manager.
Unfortunately, the network manager IS considered disconnected for a
moment when it is added to the list, so the main thread MAY end up
getting to the network manager before Netty has connected the channel.
This is still very rare under normal circumstances, but because
ProtocolLib does a lot of initial processing in the channel handler, the
asynchronous thread gets held up for a long while the first time a
player connects to the server, allowing the main thread sufficient time
to catch up and evict the network manager.
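For reference, the main-thread clean-up looks roughly like the sketch
below (paraphrased from the vanilla code, with the obfuscated names
simplified):

    private void tickConnections(List<NetworkManager> networkManagers) {
        synchronized (networkManagers) {
            Iterator<NetworkManager> it = networkManagers.iterator();

            while (it.hasNext()) {
                NetworkManager manager = it.next();

                if (!manager.isConnected()) {
                    // A freshly added manager can still look "disconnected"
                    // here, because Netty has not activated its channel yet -
                    // this is the window in which it gets evicted.
                    it.remove();
                    manager.handleDisconnection();
                }
            }
        }
    }
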
The partial solution here is to synchronize on the network manager list,
stopping the main thread from processing network managers when we are
preparing our new connection.
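In practice that means wrapping the injection in a synchronized block on
that same list (sketch; the list is obtained via reflection and the
helper name is illustrative):

    private void injectSafely(Channel channel, List<Object> networkManagers) {
        // Holding the list's monitor keeps the main thread out of its
        // network-manager processing (and clean-up) while we inject.
        synchronized (networkManagers) {
            injectChannelHandlers(channel);
        }
    }
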
But I suspect the best solution would be to address the root of the
problem and fix the race condition in the server itself. This could be
done by only adding the network manager once the channel is active
(see ChannelInboundHandler.channelActive).
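Sketched against the Netty API, such a fix might look like the handler
below (hypothetical; it would have to live in the server itself, not in
ProtocolLib):

    public class RegisterOnActiveHandler extends ChannelInboundHandlerAdapter {
        private final List<NetworkManager> networkManagers;
        private final NetworkManager manager;

        public RegisterOnActiveHandler(List<NetworkManager> networkManagers,
                                       NetworkManager manager) {
            this.networkManagers = networkManagers;
            this.manager = manager;
        }

        @Override
        public void channelActive(ChannelHandlerContext ctx) throws Exception {
            // Only expose the manager to the main thread once the channel is
            // actually active, so it can never be observed as "disconnected"
            // while it is still being set up.
            synchronized (networkManagers) {
                networkManagers.add(manager);
            }
            super.channelActive(ctx);
        }
    }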