A special case occurs when a plugin sends a packet to a client
with filters set to FALSE (that is, bypassing most packet listeners) -
a new packet event is constructed solely for all MONITOR listeners, as
they are informed regardless of the value of FILTER.
Unfortunately, the sending method may be invoked on a thread other
than the main thread, which will invoke onPacketSending()
asynchronously. This violates the assumed thread affinity of
onPacketSending(), so we will now schedule the packet sending on
the main thread to correct this - but only if there are monitor
listeners, and they have not specified ListenerOptions.ASYNC (which
means onPacketSending() is thread safe).
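A minimal sketch of that scheduling decision, assuming a reference to
the ProtocolLib plugin instance and placeholder helpers
(hasAsyncMonitors() and doSendPacket() are not the actual internals):

    private void sendWithMonitors(final PacketEvent event) {
        if (Bukkit.isPrimaryThread() || hasAsyncMonitors()) {
            // Already on the main thread, or every MONITOR listener has
            // declared itself thread safe - send (and notify) directly.
            doSendPacket(event);
        } else {
            // Preserve the assumed thread affinity of onPacketSending()
            // by deferring the send to the main thread.
            Bukkit.getScheduler().scheduleSyncDelayedTask(plugin, new Runnable() {
                @Override
                public void run() {
                    doSendPacket(event);
                }
            });
        }
    }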
We now accept any string in this constructor, to preserve
backwards compatibility. However, we deprecate its use, as
WrappedGameProfile(UUID, String) can be used in every Minecraft
version that supports a game profile.
There's also a new warning system that will identify the plugin
that is using the deprecated method, and print its name to the
console (at most once every hour).
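For reference, the two constructors look like this (example values;
the UUID-based form is the one to prefer):

    // Deprecated, but still accepted for backwards compatibility
    WrappedGameProfile legacy = new WrappedGameProfile(
        "069a79f4-44e9-4726-a5be-fca90e38aaf5", "Notch");

    // Preferred - available in every Minecraft version with game profiles
    WrappedGameProfile profile = new WrappedGameProfile(
        UUID.fromString("069a79f4-44e9-4726-a5be-fca90e38aaf5"), "Notch");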
It is possible, though not confirmed, that ProtocolLib has not been
fully cleaned up after a "reload" command by the time the next
instance of ProtocolLib is loaded. In that case, a channel may be
injected on the main thread while its cleanup procedure is still
running.
This is an attempt to solve that problem, though it is not confirmed
to work.
Currently, the ClassSource returned by ClassSource.fromMap() will
return null if the class cannot be found (as that is the behavior of
maps). However, other ClassSources throw a ClassNotFoundException if
the class cannot be loaded.
This commit changes the behavior of ClassSource.fromMap() to throw a
ClassNotFoundException if the class was not found in the map (or was
mapped to null). It also changes the method to interpret a null map
as an empty map.
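Roughly how the new behavior looks (a sketch of the changed method;
the surrounding ClassSource API is as described above):

    public static ClassSource fromMap(final Map<String, Class<?>> map) {
        return new ClassSource() {
            @Override
            public Class<?> loadClass(String canonicalName) throws ClassNotFoundException {
                // A null map is interpreted as an empty map
                Class<?> loaded = map != null ? map.get(canonicalName) : null;

                if (loaded == null) {
                    // Match the behavior of the other ClassSources
                    throw new ClassNotFoundException(
                        "The class " + canonicalName + " was not found in this map.");
                }
                return loaded;
            }
        };
    }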
This was caused by the fact that "requireInputBuffer" used
findLegacy(int) to get the correct PacketType, instead of
findLegacy(int, Sender.CLIENT). The latter is justified by the fact
that only client-side packets require an input buffer.
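In other words (a one-line sketch of the corrected lookup):

    // Restrict the legacy lookup to client-side packets
    PacketType type = PacketType.findLegacy(packetId, Sender.CLIENT);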
The messages incorrectly identified the Minecraft server package as
"net.mineraft.server". This commit fixes that.
Although it is only a minor spelling error, it is worth fixing.
This instructs ProtocolLib to fetch the original state of the packet,
before it is processed by any packet listeners above LOWEST. Then,
it displays this state in the console, along with the final state as
retrieved in MONITOR.
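A rough equivalent using the public API - the real feature is built
into ProtocolLib's debug output, and the CHAT type plus string field
read here are purely illustrative:

    private void registerDebugListeners(final Plugin plugin) {
        ProtocolManager manager = ProtocolLibrary.getProtocolManager();

        // State before any listener above LOWEST has processed the packet
        manager.addPacketListener(new PacketAdapter(plugin, ListenerPriority.LOWEST, PacketType.Play.Server.CHAT) {
            @Override
            public void onPacketSending(PacketEvent event) {
                plugin.getLogger().info("Original: " + event.getPacket().getStrings().readSafely(0));
            }
        });

        // Final state, after every other listener has run
        manager.addPacketListener(new PacketAdapter(plugin, ListenerPriority.MONITOR, PacketType.Play.Server.CHAT) {
            @Override
            public void onPacketSending(PacketEvent event) {
                plugin.getLogger().info("Final: " + event.getPacket().getStrings().readSafely(0));
            }
        });
    }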
NetworkMarker now contains a list of post listeners that are invoked
(in no particular order) when a packet has been serialized and sent
to a player, OR, when it has been enqueued for processing by the
server.
This works for both 1.7.2+ (Netty) and 1.6.4 and earlier, though the
1.6.4 version has a good deal more overhead.
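A sketch of hooking in from a packet listener, assuming the
NetworkMarker.addPostListener() entry point and a PacketPostAdapter
base class (exact names may differ slightly between versions):

    @Override
    public void onPacketSending(PacketEvent event) {
        NetworkMarker marker = event.getNetworkMarker();

        marker.addPostListener(new PacketPostAdapter(plugin) {
            @Override
            public void onPostEvent(PacketEvent event) {
                // Invoked (in no particular order) once the packet has been
                // serialized and sent, or enqueued for processing by the server
                plugin.getLogger().info("Sent " + event.getPacketType());
            }
        });
    }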
If we update the number, we have to also use the "empty file" trick
to remove the old file, which will crash ProtocolLib on the first
reload. It takes a second reload for it to function at all.
It's much better to take the hit on the version number, and avoid
this issue altogether. The update method simply wasn't designed for
plugins with version numbers in their file name.
This was necessitated by two new NMS changes:
* NBTCompressedStreamTools.a(DataInput, int) now takes an additional
NBTReadLimiter parameter
* GameProfile changed the type of getId() from String to UUID, along
with its constructor from (String, String) to (UUID, String).
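A sketch of how the GameProfile change can be tolerated through
reflection (not ProtocolLib's actual code; getGameProfileClass() is
assumed to resolve the NMS GameProfile class):

    static Object createGameProfile(UUID uuid, String name) throws ReflectiveOperationException {
        Class<?> profileClass = MinecraftReflection.getGameProfileClass();

        try {
            // Newer builds: GameProfile(UUID, String)
            return profileClass.getConstructor(UUID.class, String.class)
                    .newInstance(uuid, name);
        } catch (NoSuchMethodException e) {
            // Older builds: GameProfile(String, String)
            return profileClass.getConstructor(String.class, String.class)
                    .newInstance(uuid.toString(), name);
        }
    }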
We don't want to crash plugins over this, since it doesn't
automatically cause problems. But it may trip up plugins that
assume the packet types they set when registering a listener are the
only ones they'll ever receive in the method body, which is not true
if a preceding packet listener can change a packet to an arbitrary
type.
I'm open to better suggestions here. But for now, I'll just print a
warning and hope people use sendServerPacket() instead.
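The pattern we hope people will use instead looks roughly like this
(a sketch; the CHAT replacement type is arbitrary, and filters is set
to false so the outgoing packet skips the listener chain):

    @Override
    public void onPacketSending(PacketEvent event) {
        // Don't mutate the intercepted packet into a different type -
        // cancel it and send the replacement separately.
        event.setCancelled(true);

        PacketContainer replacement = new PacketContainer(PacketType.Play.Server.CHAT);
        // ... fill in the replacement's fields ...

        try {
            ProtocolLibrary.getProtocolManager().sendServerPacket(
                event.getPlayer(), replacement, false);
        } catch (InvocationTargetException e) {
            throw new RuntimeException("Cannot send replacement packet.", e);
        }
    }

Listeners on the receiving end can likewise guard themselves by
checking event.getPacketType() before reading any fields.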
In 659f01cc63, I attempted to
execute packet listeners for receiveClientPacket() on the channel
thread, inadvertently causing them to be executed even when
filtered was FALSE, and twice when it was TRUE.
Since asynchronous packet listeners use this feature to take packets
out of the packet stream, they wound up causing an infinite packet
loop, which prevented those packets from ever being received by the
server.
An example of a dual-side packet is Packet101CloseWindow, which is
sent by the server when it forces an inventory window to close, or
by the client to the server when the player closes it voluntarily.
The bug prevented the client side of a dual-side packet from being
recognized in 1.6.4. Thanks to Shevchikden for discovering the bug,
and finding the correct build number where it was introduced.
It's another lazy initialization problem. I only check a single
field before initializing two related fields, which can cause
problems when two threads execute handleLogin() concurrently.
If thread A detects that PACKET_LOGIN_CLIENT is null, it updates both
it and LOGIN_GAME_PROFILE. However, thread B may only see the
PACKET_LOGIN_CLIENT update, and still believe LOGIN_GAME_PROFILE is
NULL. Hence the NullPointerException in issue #39.
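A minimal, self-contained illustration of the hazard - the field
names mirror the description, but the types are just stand-ins:

    public class LazyInitRace {
        private static Object packetLoginClient;   // stands in for PACKET_LOGIN_CLIENT
        private static Object loginGameProfile;    // stands in for LOGIN_GAME_PROFILE

        // Broken: only the first field is checked, so thread B may observe
        // packetLoginClient != null while loginGameProfile still appears null.
        static void handleLoginBroken() {
            if (packetLoginClient == null) {
                packetLoginClient = new Object();
                loginGameProfile = new Object();
            }
            loginGameProfile.toString();   // may throw NullPointerException (issue #39)
        }

        // Fix: make the initialization (and the read) atomic, so both fields
        // become visible together.
        static synchronized void handleLoginFixed() {
            if (packetLoginClient == null) {
                packetLoginClient = new Object();
                loginGameProfile = new Object();
            }
            loginGameProfile.toString();
        }
    }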
Constructing a WrappedWatchableObject with a
net.minecraft.server.ItemStack would cause ProtocolLib to throw an
IllegalArgumentException, even though
WrappedWatchableObject.setValue(Object) accepts ItemStacks without
trouble.
This is because WrappedWatchableObject.getUnwrappedType() didn't handle
ItemStacks at all.
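For reference, this kind of usage failed before the fix (a sketch;
MinecraftReflection.getMinecraftItemStack() is assumed for the
Bukkit-to-NMS conversion):

    Object nmsStack = MinecraftReflection.getMinecraftItemStack(new ItemStack(Material.STONE));

    // Threw IllegalArgumentException before this fix
    WrappedWatchableObject watchable = new WrappedWatchableObject(5, nmsStack);

    // This, on the other hand, always worked
    watchable.setValue(nmsStack);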
At the moment the setPlayers() method will throw a NullPointerException
if the player count has been hidden. This commit correctly resets the
player counts before setting the player list, and makes the getter
return null instead of throwing an exception.
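A sketch of the corrected behavior, assuming WrappedServerPing's
setPlayersVisible()/getPlayers()/setPlayers() methods:

    WrappedServerPing ping = new WrappedServerPing();
    ping.setPlayersVisible(false);            // hide the player count

    // Previously threw a NullPointerException while hidden; now returns null
    System.out.println(ping.getPlayers());

    // Previously threw as well; now the player counts are reset first
    ping.setPlayers(Arrays.asList(new WrappedGameProfile(UUID.randomUUID(), "Herobrine")));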
In the current ProtocolLib release (3.1.0) the getFavicon() method will
throw a NullPointerException if the server is not sending a favicon.
This was partly fixed in 3c5482f, but there is still no way of
checking whether the server is sending a favicon, short of checking
whether CompressedImage.toEncodedText() returns a valid result.
This commit makes the favicon getter and setter in the server ping
handle pings with no favicon correctly: getFavicon() returns null if
no favicon will be displayed, and the favicon can be hidden by
setting it to null.
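A sketch of the resulting API, inside a listener for the status
(server info) packet:

    WrappedServerPing ping = event.getPacket().getServerPings().read(0);

    // Returns null when no favicon will be displayed, instead of throwing
    WrappedServerPing.CompressedImage favicon = ping.getFavicon();

    // The favicon can now also be hidden explicitly
    ping.setFavicon(null);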
Calling remove() in the main thread will block the main thread, which
may lead to a deadlock:
http://pastebin.com/L3SBVKzp
ProtocolLib executes this close() method through a PlayerQuitEvent in
the main thread, which has implicitly acquired a lock on
SimplePluginManager (see SimplePluginManager.callEvent(Event)).
Unfortunately, the remove() method will schedule the removal on one of
the Netty worker threads if it's called from a different thread,
blocking until the removal has been confirmed.
This is bad enough (Rule #1: Don't block the main thread), but the real
trouble starts if the same worker thread happens to be handling a server
ping connection when this removal task is scheduled. In that case, it
may attempt to invoke an asynchronous ServerPingEvent
(see PacketStatusListener) using SimplePluginManager.callEvent(). But,
since this has already been locked by the main thread, we end up with a
deadlock. The main thread is waiting for the worker thread to process
the task, and the worker thread is waiting for the main thread to
finish executing PlayerQuitEvent.
TLDR: Concurrency is hard.
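In essence, the cycle looks like this (a generic, self-contained
illustration - not ProtocolLib code):

    import java.util.concurrent.*;

    public class DeadlockSketch {
        public static void main(String[] args) throws Exception {
            final Object pluginManagerLock = new Object();                 // SimplePluginManager
            ExecutorService worker = Executors.newSingleThreadExecutor();  // Netty worker thread

            synchronized (pluginManagerLock) {        // main thread inside callEvent(PlayerQuitEvent)
                Future<?> removal = worker.submit(new Runnable() {
                    @Override
                    public void run() {
                        synchronized (pluginManagerLock) {   // worker needs callEvent(ServerPingEvent)
                            System.out.println("never reached");
                        }
                    }
                });
                removal.get();   // main waits for the worker, worker waits for the lock
            }
        }
    }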
This allows us to provide additional information to our PacketEvent
(mostly OfflinePlayer information), without sacrificing sendMessage()
or getAddress(), which don't work in CraftPlayer during login.
The vanilla server bootstrap (ServerConnectionChannel) is executed
asynchronously when a new channel object has been registered in an event
loop, long before it is ready to accept a new client connection. It is
here the whole channel handler pipeline is set up, along with a
NetworkManager responsible for reading and writing packets.
The trouble starts when the bootstrap class adds the created
NetworkManager to a list (f) of managers in ServerConnection. This list
is regularly inspected by the main thread (in order to process packets
on the main thread) and includes a clean up procedure
(ServerConnection#61) in case it detects a disconnected network manager.
Unfortunately, the network manager IS considered disconnected for a
moment when it is added to the list, so the main thread MAY end up
getting to the network manager before Netty has connected the channel.
This is still very rare under normal circumstances, but because
ProtocolLib does a lot of initial processing in the channel handler, the
asynchronous thread gets held up for a long while the first time a
player connects to the server, allowing the main thread sufficient time
to catch up and evict the network manager.
The partial solution here is to synchronize on the network manager list,
stopping the main thread from processing network managers when we are
preparing our new connection.
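A sketch of that synchronization (getNetworkManagers() and
injectChannelInternal() are placeholders for the reflective lookup
and the actual injection):

    private void injectSafely(Object serverConnection, Channel channel) {
        // ServerConnection's list of network managers, retrieved reflectively
        List<Object> networkManagers = getNetworkManagers(serverConnection);

        // While we hold the list's monitor, the main thread cannot run its
        // cleanup pass and evict the half-initialized network manager.
        synchronized (networkManagers) {
            injectChannelInternal(channel);
        }
    }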
But I suspect the best solution would be to address the root of the
problem and correct the race condition in the server itself. This
could be done by only adding the network manager once the channel is
active (see ChannelInboundHandler.channelActive()).
Now onPacketSending() should only be executed on the main thread,
unless the listener has specified ListenerOptions.ASYNC. This option
is off by default, however.
This ensures that older plugins, which assume onPacketSending() is
called on the main thread, don't crash the server. They SHOULD use
the ASYNC option if possible, as this explicit synchronization will
slow down the STATUS protocol somewhat.
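Opting in looks roughly like this (the exact builder methods, such as
params() and optionAsync(), may vary between ProtocolLib versions):

    ProtocolLibrary.getProtocolManager().addPacketListener(
        new PacketAdapter(PacketAdapter.params(plugin, PacketType.Status.Server.SERVER_INFO).optionAsync()) {
            @Override
            public void onPacketSending(PacketEvent event) {
                // Declared thread safe - may run off the main thread without
                // the extra synchronization described above
            }
        });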