Currently, when dealing with physical interactions with pressure plates
and tripwires, we immediately block their activation as soon as a single
involved entity has its event cancelled. We also fire events whenever
an entity intersects the block a wooden button is in, even if it isn't
actually pressing the button. To correct this we move the button
interaction to the correct place and modify all three to only block the
activation if every entity is blocked from using them instead of just one.
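A minimal sketch of the revised check, using hypothetical helper names
rather than the actual NMS methods:
    // Sketch only: the entity list and event helper stand in for the NMS code.
    private boolean isActivationAllowed(Block block, List<Entity> intersecting) {
        for (Entity entity : intersecting) {
            EntityInteractEvent event = callEntityInteractEvent(entity, block);
            if (!event.isCancelled()) {
                return true; // one uncancelled entity is enough to activate
            }
        }
        return false; // block activation only when every entity was cancelled
    }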
CraftServer methods that implement the Server interface will throw an
IllegalArgumentException when given a null pointer that the method cannot
operate on.
This causes methods to fail early and identifies that a plugin is
responsible for passing in an invalid argument. Where an exception was
already being thrown, this only changes the type of the exception thrown.
This helps with hunting down legitimate problems with CraftBukkit.
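The pattern looks roughly like the following sketch, assuming Apache
Commons Lang's Validate is available (already a CraftBukkit dependency);
the method shown is hypothetical, not an actual CraftServer method:
    import org.apache.commons.lang.Validate;

    public class NullCheckExample {
        // Hypothetical method illustrating the argument check described above.
        public void doSomething(String name) {
            // Throws IllegalArgumentException instead of a later NullPointerException.
            Validate.notNull(name, "Name cannot be null");
            // ... the rest of the method can assume a non-null argument ...
        }
    }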
If the server changes the weather it will set the per-player weather
variable and future changes will not apply. We should only set this
variable when a plugin is requesting per-player weather, not when the
server is doing it.
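A rough sketch of the intended split, with hypothetical field and method
names:
    // Plugin request: remember the override so server weather no longer applies.
    public void setPlayerWeather(WeatherType type) {
        this.playerWeatherOverride = type;
        sendWeatherPacket(type);
    }

    // Server-driven change: only forwarded when no per-player override is set.
    public void setServerWeather(WeatherType type) {
        if (this.playerWeatherOverride == null) {
            sendWeatherPacket(type);
        }
    }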
We used to call Item.filterData() for this but that method is meant for
converting item data to block data during placement and does the wrong
thing for this case. Instead we just check whether the item should have
data and, if not, set it to zero. We also have to filter wool data
explicitly because clients crash when given invalid wool data.
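In sketch form (the helper names are hypothetical stand-ins for the NMS
checks):
    int data = itemstack.getData();
    if (!itemUsesData(itemstack.getTypeId())) {
        data = 0; // the item should not carry any data
    } else if (itemstack.getTypeId() == WOOL_ID && (data < 0 || data > 15)) {
        data = 0; // invalid wool data crashes clients
    }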
In Minecraft 1.5 saplings do not grow with a single use of bonemeal anymore.
Our code assumes they will and only takes away bonemeal from the player
when the tree grows successfully (not cancelled by a plugin). Instead we
now always remove one bonemeal even if a plugin is the reason a tree didn't
grow, as this matches the vanilla logic more closely.
If a custom TravelAgent is used and its findOrCreate method returns null,
a NullPointerException will occur.
Conflicts:
src/main/java/net/minecraft/server/PlayerList.java
Currently, CraftTravelAgent will call s() on the passed-in WorldServer in
order to set DEFAULT. However, s() will always return null at this point,
because WorldServer.P will still be null, as it is set after the constructor
is called. Instead, we set CraftTravelAgent.DEFAULT to the instance that is
being constructed.
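A simplified sketch of the change (not the exact source; TravelAgent
method bodies are omitted):
    public class CraftTravelAgent extends PortalTravelAgent implements TravelAgent {

        public static TravelAgent DEFAULT = null;

        public CraftTravelAgent(WorldServer worldserver) {
            super(worldserver);
            if (DEFAULT == null) {
                // worldserver.s() still returns null at this point, so use the
                // instance currently being constructed instead.
                DEFAULT = this;
            }
        }

        // TravelAgent methods omitted for brevity.
    }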
Recent changes caused PlayerPortalEvent to suddenly return null
unexpectedly, which could result in NPEs that did not occur before.
This commit addresses that situation by always ensuring a TravelAgent
instance is returned.
The TravelAgent for world 0 is returned arbitrarily in an effort to
compensate for plugins that are implementation dependent and expect some
form of a TravelAgent to be accessible in the event at all times.
Vanilla does not check for blocks in which the player could
suffocate when changing dimension, so portals will happily spawn
players in blocks when using a portal under certain
circumstances. However, we currently check for these instances
and move the player up until they will not suffocate. This means
that players can sometimes be taken to above the target portal,
making it seem as if a portal was not created. Instead, we now
disable this suffocation check when moveToWorld is called from
changeDimension, mirroring vanilla behavior more accurately.
Due to having to generate new logic to avoid using the customized
PlayerConnection.moveToWorld, entities returning from The End were not
properly calculating their exit target. This commit corrects that
logic.
By having a single function to process BlockPlacement logic, we make
it so that there is consistent behavior throughout all BlockPlace
events. This should allow for easier troubleshooting and fewer diffs
in the source.
This also fixes BUKKIT-3463 by passing the correct clicked coordinates
to the event.
Also fixes: BUKKIT-3477 and BUKKIT-3488
Minecraft likes to double check that tile entities get set after they
are placed, however we didn't set tile entities until after our event
was called. This caused the world to have multiple tile entities in a
single block location; to fix this we now set tile entities before
the event.
When the skull BlockPlaceEvent was added it was made so the event
would be called after all the data had been set; however, this is a
behavior change that is inconsistent with other BlockPlaceEvents.
Instead, if people wish to get the block data they should schedule
a task (see the example below).
Relates to: BUKKIT-3438
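For example, a plugin could read the skull's data one tick later; a
hedged sketch using the standard scheduler API, where plugin is the
plugin's own instance:
    @EventHandler
    public void onBlockPlace(final BlockPlaceEvent event) {
        Bukkit.getScheduler().scheduleSyncDelayedTask(plugin, new Runnable() {
            public void run() {
                BlockState state = event.getBlock().getState();
                if (state instanceof Skull) {
                    // The skull's type and rotation are populated by this point.
                }
            }
        });
    }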
When either of those settings is false, the worlds are not loaded and
therefore will not be targeted for portal exits. Existing worlds are
iterated directly to avoid defaulting to the first world if a direct
dimension match is not found.
Plugins must also specify the exit from custom Bukkit worlds to comply
with the original commit: https://github.com/Bukkit/CraftBukkit/commit/2dc2af0
This commit introduces a constant to clarify the dependency on the
CraftBukkit implementation of custom worlds having a dimension offset.
By returning the following value (7) we remove the need to special
case pistons in any way (other than the original purpose of this
check, which is to ensure pistons have valid data).
The previous logic was faulty since it lost the logic of "placing" the
block. It was also taking into account data that could have been
changed outside of the processing of this event, which is irrelevant
to the processing of this event.
The javadocs state that null may be used to remove the currently
playing sound; however, this causes a NullPointerException.
It also doesn't register the record correctly and mishandles
invalid items.
By using return 0, we exit the loop prematurely, preventing other
creature types from being spawned if one type is set to 0. By using
continue we move on to the other types and allow them to spawn
properly.
Fixes BUKKIT-3408, BUKKIT-3190, BUKKIT-3191, BUKKIT-3407
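In sketch form (the type limit and spawn helpers are illustrative):
    for (EnumCreatureType type : EnumCreatureType.values()) {
        if (getSpawnLimit(type) == 0) {
            continue; // previously "return 0", which also skipped every remaining type
        }
        spawnCreaturesOfType(type);
    }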
These changes are mostly semantic changes to the serialization contract,
with the exception of changing the map scaling value from a byte to a
boolean, which it should have been in the first place. Appropriate unit
tests were
added for CraftMapMeta, as they were missing.
This makes it so animals (tame or not) will sit properly and not move
around.
Wild animals that are sitting may override the sitting position if
they are attacking.
The 'tag' NBTTagCompound field of the ItemStack assumes that it is OK to
save a reference to an NBT supplied via load() and assumes it is OK to
supply a reference to the internal field during a save(). Neither is true,
as Chunk NBT structures are required to be read-only once created (due to
being written asynchronously off the server thread AND due to the potential
to be passed to a new Chunk if the same chunk is reloaded before the
writing of the NBT is completed by the File I/O thread). Keeping a live
reference to the NBT copy passed in, or to the NBT value passed back
during saving, creates serious thread safety issues which can result in
corrupted data being written to the world data files.
The specific issue here was uncovered by the recent change to use
setName("") on the ItemStack.tag object. When a chunk is being loaded
again before its save is completed, this results in the name of the field
in the NBT being set to "". This causes it to be saved as "" instead
of "tag", resulting in it not being properly reloaded in the future, which
results in the itemstack losing all of its metadata.
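A hedged sketch of the defensive-copy approach (simplified; clone() on an
NBT tag produces a deep copy):
    // load(): copy the supplied compound instead of keeping a live reference.
    this.tag = (NBTTagCompound) nbttagcompound.getCompound("tag").clone();

    // save(): hand back a copy so the internal field is never written out
    // asynchronously by the File I/O thread.
    nbttagcompound.setCompound("tag", (NBTTagCompound) this.tag.clone());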
Teleportation should never be processed on dead entities. If you wish
to teleport an entity, do it on a living entity. If you wish to
teleport a player, set their respawn location in PlayerRespawnEvent.
This adds two settings to bukkit.yml, allowing activation and control of
two chunk garbage collection triggering conditions:
chunk-gc/period-in-ticks controls a periodic GC, run once every N ticks
(default is 600); chunk-gc/load-threshold causes the GC to run once
after every N calls to loadChunk() on a given world (this call is an API
call used by plugins, and is distinct from the path taken for routine
player movement-based loading). In both cases, setting to zero will
disable the given GC scheduling strategy.
In either case, the act of doing the GC is simply one of scanning the
loaded chunks, seeing which are NOT being used by one or more players
(due to view-distance) and which are not already queued for unload, and
queueing them for a normal unload. Ultimately, the unload is then
processed the same as if the chunk were unloaded due to leaving the
view-distance range of all players, so the impact on plugins should be
no different (and strategies such as handling the ChunkUnloadEvent in
order to prevent unload will still work).
The initial interval for the periodic GC is randomized on a per-world
basis, in order to avoid all worlds being GCed at the same time -
minimizing potential lag spikes.
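In rough terms the GC pass looks like the following sketch (the field and
method names are illustrative, not the actual ChunkProviderServer members):
    for (Chunk chunk : world.chunkProvider.getLoadedChunks()) {
        if (isChunkInUseByPlayer(world, chunk.x, chunk.z)) {
            continue; // still within some player's view distance
        }
        if (world.chunkProvider.unloadQueue.contains(chunk.x, chunk.z)) {
            continue; // already queued for unload
        }
        world.chunkProvider.queueUnload(chunk.x, chunk.z); // normal unload path
    }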
With the persistence api introduced, pets did not have their
persistence flag updated to reflect their persistence. This caused
tame ocelots to not persist under specific conditions.
Slimes and wolves have health that can change based on certain
conditions. So we check if their max health should be updated, and if
it has been customized in any way.
We also scale the wolf's health for its tail display.
An ItemStack gains the tag name "tag" when the stack is serialized
to NBT, however items don't have a tag *until* they are serialized at
least once. So to solve this, we remove the tag name when loading the
NBT data.
Another problem with NBT is TagLists: when transferring tag lists
between the server and the client the names are lost, and so we
simply don't add a name to the tag.
If you cancel a BlockPlaceEvent for a sign the world is updated as if
the block was placed and then destroyed. To avoid this we set the block
without updating physics then apply the update after the event.
When unloading chunks we have a check to ensure we do not remove players
from the world due to the issues this would cause. However, our check
to see if the player is in this chunk is reversed and is in fact entirely
wrong. Even if the player isn't currently in this chunk we do not want
to remove them as that will still cause the same issues.
The key "direction" incorrectly mapped to variables that were already
set in the entity. In order to prevent loading incorrect data we
renamed "direction" to "power."
The player would have no permissions (other than their OP status)
when checked in the Quit event. This is because we removed permissions
before the event occurred. By calling it afterwards, we can persist
the data until the server finally removes the player.
In some situations, an async task could be cancelled with no tasks
pending. This means the finally {} block from run() never gets executed
properly on the last async task to have run, as it expected to be
executed again.
This fix takes the only spot where the task period is set to cancelled
and checks whether the task should be purged from the runners
list.
Some meta functionality is refactored into common methods.
CraftItemStack uses the ItemMetaKey identifiers for enchantments.
Refactored unit test to include extra functionality; initially only
checking the presence of the DelegateDeserialization annotation.
The setTexturePack method causes the player's client to
download and switch to a texture pack specified by a URL.
Note: Players can disable server textures on their client, in which
case this API would not affect them.
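Usage is a single call (the URL below is a placeholder):
    // Ask the client to download and switch to the given texture pack.
    player.setTexturePack("http://example.com/texturepack.zip");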
With 1.4, entity sound tracking changed for the better.
Our previous method additions can now be removed.
All that's left is checking if the source can be seen
by the recipient of the sound packet. Thanks, Mojang!
The isSimilar method was designed to consider all NBT
data, not solely enchantments, without the need to have exact stack
size matches. The respective methods in CraftInventory were still
comparing enchantments instead of the ItemMeta.
Changes some NPEs to IllegalArgumentExceptions for exception consistency.
contains(ItemStack, int) now correctly calculates the number of ItemStacks.
Adds a containsAtLeast(ItemStack, int) for finding a combined amount of a
single similar ItemStack.
Makes some utility methods private to prevent ambiguity in use.
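A brief usage sketch of the affected methods, assuming an ItemStack named
item and a player's inventory:
    // isSimilar ignores stack size but compares the full ItemMeta,
    // not just the enchantments.
    boolean similar = item.isSimilar(other);

    // containsAtLeast sums the amounts of every similar stack in the inventory.
    boolean hasTen = player.getInventory().containsAtLeast(item, 10);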
When a player triggers a chunk load via walking around or teleporting there
is no need to stop everything and get this chunk on the main thread. The
client is used to having to wait some time for this chunk and the server
doesn't immediately do anything with it except send it to the player. At
the same time chunk loading is the last major source of file IO that still
runs on the main thread.
These two facts make it possible to offload chunks loaded for this reason
to another thread. However, not all parts of chunk loading can happen off
the main thread. For this we use the new AsynchronousExecutor system to
split chunk loading into three pieces. The first is loading data from
disk, decompressing it, and parsing it into an NBT structure. The second
piece is creating entities and tile entities in the chunk and adding them
to the world, this is still done on the main thread. The third piece is
informing everyone who requested a chunk load that the load is finished.
For this we register callbacks and then run them on the main thread once
the previous two stages are finished.
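Schematically the split looks like this (names are illustrative; the
actual staging goes through the AsynchronousExecutor):
    // Stage 1 (async): file I/O, decompression, and NBT parsing.
    NBTTagCompound data = readChunkNBT(regionFile, x, z);

    // Stage 2 (main thread): build the chunk, create entities and tile entities.
    Chunk chunk = addChunkToWorld(world, data);

    // Stage 3 (main thread): notify everyone who registered a callback for this load.
    for (Runnable callback : callbacks) {
        callback.run();
    }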
There are still cases where a chunk is needed immediately and these will
still trigger chunk loading entirely on the main thread. The most obvious
case is plugins using the API to request a chunk load. We also must load
the chunk immediately when something in the world tries to access it. In
these cases we ignore any possibly pending or in progress chunk loading
that is happening asynchronously as we will have the chunk loaded by the
time they are finished.
The hope is that overall this system will result in less CPU time and
pauses due to blocking file IO on the main thread thus giving more
consistent performance. Testing so far has shown that this also speeds up
chunk loading client side, although some of this is likely because
we are sending fewer chunks at once for the client to process.
Thanks to @ammaraskar for help with the implementation of this feature.
When a player has canPickUpLoot set to true the code for mob pickup is
triggered, which does not know how to deal with player inventory. Since
players have their own logic for picking up items we simply disable this
code for them.
The old flag for picking up loot defaulted to false, making existing
players unable to pick up items. We now use this flag for players, which
reintroduces the problem we had in 48b46f83.
To fix this, we add an incremental flag that is checked to determine
whether the data was saved before or after the new flag was introduced.
Addresses BUKKIT-3143
As an added feature, players are unable to pick up items if the flag is
false. However, since Minecraft doesn't normally use the flag on players,
the flag was always false.
Adds (example below):
- Getting/setting equipment
- Getting/setting drop rates
- Getting/setting ability to pick up items
-- As an added feature, players with this flag start off with a cancelled PlayerPickupItemEvent
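A short usage example of the added API (the zombie is purely for
illustration):
    LivingEntity zombie = (LivingEntity) world.spawnEntity(location, EntityType.ZOMBIE);
    EntityEquipment equipment = zombie.getEquipment();
    equipment.setItemInHand(new ItemStack(Material.IRON_SWORD)); // set equipment
    equipment.setItemInHandDropChance(0.5f);                     // set a drop rate
    zombie.setCanPickupItems(true);                              // allow item pickup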
When a mob is marked with the persistent flag (animal or anything with
setRemoveWhenFarAway(false)) the entire block of code for checking if they
should be despawned is skipped. However, one part of this code updates the
mob state if a player is close enough to them. It turns out this state is
used by the AI system to decide if the mob should move around randomly or
not. To stop mobs from being frozen in place we now update this state if
the persistent flag is set as well.
Currently when a plugin wants to get the location of something it calls
getLocation() which returns a new Location object. In some scenarios this
can cause enough object creation/destruction churn to be a significant
overhead. For these cases we add a method that updates a provided Location
object so there is no object creation done. This allows well-written code
to work on several locations with only a single Location object getting
created.
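A usage sketch of the new variant:
    Location loc = new Location(null, 0, 0, 0); // created once and reused
    for (Entity entity : world.getEntities()) {
        entity.getLocation(loc); // fills in the provided object, no allocation
        // ... work with loc ...
    }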
Providing a more efficient way to set a location was also looked at but
the current solution is the fastest we can provide. You are not required
to create a new Location object every time you want to set something's
location so, with proper design, you can set locations with only a single
Location object being created.
The old default for the persistent flag on mobs was false which was then
written out to their NBT data when they were saved. We now use this data
for all mobs, not just non-animal mobs. However, this means animals that
spawned before that change will now start despawning like monsters do.
To avoid this we add a new flag to the mob's saved data to mark if the
data was saved before or after we started using it and ignore it if it
was before.
As of 1.4 mobs have a flag to determine if they despawn when away from a
player or not. Unfortunately animals still use their own system to prevent
despawning instead of making use of this flag. This change modifies them
to use the new system (defaults to true) and adds API for plugins to
adjust this.
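The added API in use:
    // Prevent this animal from despawning when no player is nearby.
    animal.setRemoveWhenFarAway(false);
    boolean willDespawn = animal.getRemoveWhenFarAway();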
If the player is not in Creative (i.e. does not have the ability to
instantly build) we need to decrement the MonsterEgg item stack when used
on a breedable parent mob.
Stale player references will add a player back into the world when
teleporting them, causing a cascade of issues relating to ghost entities
and servers failing to stop.
On join we unconditionally add the player to the world they logged out in.
If a plugin teleports a player during PlayerJoinEvent in a way that adds
them to a world (cross-world teleport) we end up with one player in two
places. To avoid this, if the player has changed worlds or is already
added to the world we have, we skip adding them again.
This is a missed part of the original "[Bleeding] Use case from player data
for OfflinePlayer. Fixes BUKKIT-519" commit. It avoids doing (somewhat
expensive) lookups of player data to find the correct capitalization inside
getOfflinePlayers() as we're already loading their name from the player data
and thus have the correct capitalization.
When sending chunks to a player we use their writer thread to do chunk
compression to avoid blocking the main thread with this work. However,
after a teleport or respawn there are a large number of chunk packets to
process. This causes the thread to spend a long period handling compression
while we continue dumping more chunk packets on it to handle. The result of
this is a noticeable delay in getting responses to commands and chat
immediately after teleporting.
Switching to a lower compression level reduces this load and makes our
behavior more like vanilla. We do, however, still give this thread more
work to do so there will likely still be some delay when comparing to
vanilla. The only way to avoid this would be to put chunk compression back
on the main thread and give everyone on the server a poorer experience
instead.
When an event changes the item to be dispensed we check to see if the new
item has special behavior for dispensing and if so pass it on to that
behavior handler. However, we are actually checking the old itemstack and
passing the new itemstack so this check fails.
If a plugin looks up a player that is offline they may not know the correct
capitalization for the name. In this case they're likely to get it wrong
and since we cache the result even after the player joins the server all
future requests for an OfflinePlayer will return one with incorrect case.
When looking up a player who has played on the server before we can
get the correct case from the player data file saved by the server. If
the player has never played before this point we cannot do anything and
will still have the same issue but this is not a solvable problem.
If a player travels past 32,000,000 blocks on the X or Z coordinates they
will be kicked for having an illegal position. On kick their player data
is saved which includes their (illegal) position. This means on join they
are immediately kicked again for the same reason and are stuck. Instead of
kicking at all in this case we just teleport the player back to their
previous position, just like the moved-wrongly check does.
In order to correctly handle disconnects for invalid chat we setup a
Waitable and pass it to the main thread then wait for it to be processed.
However, commands are also chat packets and they are already on the main
thread. In this case, waiting will deadlock the server so we should just
do a normal disconnect.
End portals can only be placed in the end during the dragon's death.
Attempts to place them outside of this window cause the block to remove
itself. However, we still create the tile entity for the portal which
leads to exceptions spamming the console about a tile entity existing
without the appropriate block. In these cases we should not place the tile
entity at all.
When invalid chat is detected we currently drop the connection with no
hint as to why as anything else is not allowed while we're off the main
thread. To give valid disconnect reasons and fire proper events instead
pass these off to the main thread and wait for it to process them.
If a plugin cancels a PlayerInteractEvent when left clicking a block the
client may have removed this block if they are in creative mode or if the
block breaks in a single hit. In this case, we need to update the client's
tile entity as well as telling it the block still exists.
Packet 51 is used to send updates about large changes to single chunks
and to remove chunks from the client when they get out of range. In the
first case a single packet object is created and queued for all relevant
players. With our current chunk compression scheme this means the first
player to have the packet processed will start the compression and get the
packet correctly but the rest will get garbage.
Since this packet never contains much data it is better to simply handle
compression of it on the main thread like vanilla does instead of putting in
locks and dealing with their overhead and complexity.
When a client tries to break a block it assumes it has done so unless told
otherwise by the server. This means the client also wipes out any tile
entity data it has for the block as well. We do not send this data when
updating the client so clients lose things like text on signs, skull type,
etc when they aren't allowed to break the block.
Skulls need their tile entity in order to create an item correctly when
broken unlike every other block. Instead of sprinkling special cases all
over the code just override dropNaturally for skulls to read from their
tile entity and make sure everything that wants to drop them calls this
method before removing the block. There is only one case where this wasn't
already true so we end up with much less special casing.
Sheep now use the crafting system when breeding to determine what color
their baby should be. This triggers an event but the event wants the
crafting inventory to have a result slot which sheep do not have. This
event could be useful for plugins to control the output of sheep breeding
so instead of disabling it we add a result slot so the event fires without
issue.
If a chunk gets a block added to it that requires the extended block id
nibble array (block id greater than 255) the array is created and saved
with the chunk. When the blocks are verified to make sure they exist these
entries are erased but the extended block id array is not. This causes the
server and client to disagree about how much data a chunk has which makes
the client crash while trying to load the chunk for rendering.
To resolve these issues we now clear the extended block id array on chunk
load if there is no valid data in it.
When a block creates a falling entity the block is not immediately removed
from the world. Instead, the falling entity is responsible for removing it
but only if the block still exists. Due to certain piston mechanics it is
possible to move the block before this check happens and thus the block is
not removed. This should be fine as the entity will kill itself in this
situation. However, the code does not stop here and continues running the
rest of the entity logic which includes either placing a block in the world
or placing a block item in the world depending on the circumstances.
If a block is air we return immediately and so miss the cleanup work that
would normally happen in this case in vanilla. This causes us to get into a
situation where, due to odd packet sending from the client, we never
properly stop an attempt by the client to break a block and thus it
eventually breaks.
We also use our own variable for block damage and never sync it up with the
vanilla one so damage reporting to other clients is not always correct.
The static assertions are not normally evaluated by the JVM, and so failed
to fail when the enums went from size 25 to size 26. This meant missing
values would not be detected at runtime and would instead return null,
compounding problems later. The switches should never evaluate to null,
so they will instead throw runtime assertion errors.
Additional unit tests were added to detect new paintings and assure they
have proper, unique mappings. The test checks that a mapping
exists, is not null, and does not duplicate another mapping.
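The pattern, sketched against a hypothetical mapping rather than the
actual painting code:
    switch (art) {
        case KEBAB:
            return NMS_KEBAB; // illustrative mapping
        // ... one case per known Art constant ...
        default:
            // Fail loudly instead of silently returning null for an unmapped value.
            throw new AssertionError("Unknown painting type: " + art);
    }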
If a defensive copy is not used in the API, changes to the item are
reflected in memory, but never updated to the client. It also goes
against the general contract provided in Bukkit, where setItem should be
the only way to change the underlying item frame.
Skull blocks store their type in a tile entity and use their block data
as rotation. When breaking a block the block data is used for determining
what item to drop. Simply changing this to use the skull method for getting
their drop data is not enough because their tile entity is already gone.
Therefore we have to special case skulls to get the correct data _and_ get
that data before breaking the block.
On player death player PotionEffects need to be updated so that a player's
invisibility and other effects are removed, otherwise they will persist
after a respawn. This is a carry-over from our use of persistent player
entities.
Some features added in 1.4.2 use the difficulty value as an index to an
array so while before having it set to an invalid value would do nothing
or maybe cause an odd side effect somewhere it now crashes the server. This
patch ensures difficulty values are clamped between 0 and 3, inclusive.
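The clamp itself is a one-liner:
    // Constrain the configured difficulty to the valid vanilla range.
    difficulty = Math.max(0, Math.min(3, difficulty));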
Filtering item data is usually a good idea to make sure we don't have
invalid data or data on items that shouldn't have it. However, anvils
use item data in a slightly different way, so running the filtering code
here causes the data to be corrupted.
A couple method names were changed between 1.3.2 and 1.4.2 but were missed
in the update. One of these affects being able to enchant bows and the
other is used for updating player animations while firing.
Vanilla has its own handlers for plugin channel messages for things like
texture packs, books, and anvils. When vanilla handles one of these messages
we should not also pass it to plugins because they will be duplicating work
and potentially running into situations our plugin system isn't set up to
handle. This is how 1.3.2 worked but was lost in the 1.4.2 update.
CommandMap now contains the functionality for tab completion. This
commit replaces the vanilla implementation and simply delegates it to
the Bukkit API.
This change affects the old chat compatibility layer from an
implementation-only standpoint. It does not queue the 'event' to fire,
but rather queues a runnable that allows the calling thread to wait for
execution to finish.
The other effect of this change is that rcon connections now have their
commands queued to be run on next server tick using the same
implementation.
The internal implementation is in org.bukkit.craftbukkit.util.Waitable.
It is very similar to a Future<T> task, but only contains minimal
implementation with object.wait() and object.notify() calls
under the hood of waitable.get() and waitable.run().
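A simplified sketch of the idea (not the exact CraftBukkit source):
    public abstract class Waitable<T> implements Runnable {
        private T value;
        private boolean done = false;

        protected abstract T evaluate(); // executed on the main thread

        @Override
        public synchronized void run() {
            value = evaluate();
            done = true;
            notifyAll(); // wake any thread blocked in get()
        }

        public synchronized T get() throws InterruptedException {
            while (!done) {
                wait();
            }
            return value;
        }
    }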
PlayerPreLoginEvent now properly implements thread-safe event execution
by queuing the events similarly to chat and rcon. This is still a poor,
albeit proper, way to implement thread safety; PlayerPreLoginEvent will
stay deprecated.
The implementation for the new methods mimics the old methods. The final
call for the old methods now maps to the new methods with an additional
call to get id.
If two players (or a player and any other entity) are teleported to the
same location in the same tick they will both get added to the other's
destroy queue then have a new entity spawn packet sent. Next tick the
destroy queue will be processed and they will then be invisible to each
other. To prevent this situation we remove the entity from the destroy
queue when sending out a spawn packet for them.
The new AI system introduced by Minecraft 1.2 no longer relies on the
target field in the entity so it is frequently out of sync with what the
entity is actually doing. This modifies the AI goal to update the target
so our API can return the correct information.
In 1.2.5 and older versions of CraftBukkit we allowed the use of data
values on huge mushroom and mob spawner blocks for use with plugins.
For the 1.3 update the mechanism for doing this was changed and I
accidentally used the wrong value when adding these, indicating that
they should not have data instead of our actual intent. This change
corrects this regression.
If a plugin calls player.hidePlayer(other); then player.showPlayer(other);
in the same tick the other player will be added to the entity destroy queue
then a spawn packet will be sent. On the next tick the queue will be
processed and a destroy packet will be sent that renders the other player
invisible. To correct this we ensure the destroy queue is in sync with use
of the vanish API.
In some situations an entity or tile entity can be added to the world but
have its own 'world' field be null or otherwise incorrect. As the entity
was added to this world to be ticked assume it actually is in this world.
An internal method for making the debug output for CraftScheduler's
async tasks was erroneously using the 'this' reference when the loop
should be referencing the current task.
This change was done to remove the internal sound names from the API.
Along with moving the internal names into CraftBukkit, a unit test was
added for any new sounds added in the API to assure they have a non-null
mapping.
After further testing it appears that while the original LongHashtable
has issues with object creation churn and is severely slower than even
java.util.HashMap in general case benchmarks, it is in fact very efficient
for our use case.
With this in mind I wrote a replacement LongObjectHashMap modeled after
LongHashtable. Unlike the original implementation this one does not use
Entry objects for storage so does not have the same object creation churn.
It also uses a 2D array instead of a 3D one and does not use a cache as
benchmarking shows this is more efficient. The "bucket size" was chosen
based on benchmarking performance of the HashMap with contents that would
be plausible for a 200+ player server. This means it uses a little extra
memory for smaller servers but almost always uses less than the normal
java.util.HashMap.
To make up for the original LongHashtable being a poor choice for generic
datasets I added a mixer to the new implementation based on code from
MurmurHash. While this has no noticeable effect positive or negative with
our normal use of chunk coordinates it makes the HashMap perform just as
well with nearly any kind of dataset.
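The mixer is along the lines of MurmurHash3's 64-bit finalizer (shown as
an illustration; not necessarily the exact constants used):
    static int hash(long key) {
        key ^= key >>> 33;
        key *= 0xff51afd7ed558ccdL;
        key ^= key >>> 33;
        key *= 0xc4ceb9fe1a85ec53L;
        key ^= key >>> 33;
        return (int) key;
    }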
After these changes ChunkProviderServer.isChunkLoaded() goes from using
20% CPU time while sampling to not even showing up after 45 minutes of
sampling due to the CPU usage being too low to be noticed.
This fix changes the 'state' of the last accessed variables to be more
accurate. Changing the coordinates of the last accessed chunk should
never precede actually setting the last accessed chunk, as loading a
chunk may at some point call back to getChunkAt with a new set of
coordinates before the chunk has actually been loaded. The coordinates
would have been set, but the actual chunk would not. With no check for
accuracy, this causes fringe case issues such as null block states.
Big thanks to @V10lator for finding where the root of the problem was
occurring.
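The corrected ordering, in sketch form (field names are illustrative):
    Chunk chunk = provider.getChunkAt(x, z); // may reenter with other coordinates
    this.lastChunkAccessed = chunk;          // set the chunk reference first...
    this.lastXAccessed = x;                  // ...and only then the coordinates, so
    this.lastZAccessed = z;                  //    the cached pair is never half-updated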
This implementation of a visibility API check for sounds
was created by adding extra methods carrying the source entity
in WorldManager and ServerConfigurationManagerAbstract and
adding a test for canSee in the SCMA sendPacketNearby method.
This approach involves no logic copying, just method addition.
I opted to cast to WorldManager as:
1) IWorldAccess is not in CraftBukkit at the moment
2) There is no other IWorldAccess implemented in CraftBukkit,
nor is there likely to be one soon. If that day comes, easy fix.
The new setting is located at "ticks-per.autosave". Changing this value
affects how often a full save is automatically executed, measured in
ticks.
This value defaults to 0 (off) because we believe that the vast majority
of servers already have a third-party solution for automatically saving
the server at set intervals. Having the built-in auto-save disabled by
default ensures that we are not saving things twice; doing so provides no
benefit and results in a detrimental, noticeable and unnecessary
performance decrease.
For servers that do not use an automated external script to perform saves,
this setting can be turned on by setting the value higher than 0, with 900
being the value used in vanilla.
Refactoring dependencies 'changes' the string literal in the code. This
commit changes the literal to instead use a char[] to initialize a new
String. On a bytecode level, there will not exist a String literal for these
two values; the shade plugin will no longer refactor them.
Refactoring jline also changes the other String literals we use for
notifying jline of the current state. To ensure that our local code reflects
the inner logic in jline, the key value was changed to the static final
variable located in TerminalFactory. Likewise, UnsupportedTerminal uses the
explicit class name (as reflection is used later with the value that has
been set).
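The technique itself is simple (the value below is a placeholder, not the
actual key):
    // Building the String from a char[] keeps it out of the class's constant
    // pool, so the shade plugin's package relocation has nothing to rewrite.
    private static final String KEY =
            new String(new char[] {'s', 'o', 'm', 'e', '.', 'k', 'e', 'y'});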
Async tasks are notorious for causing CMEs and corrupted data when
accessing the API. This change adds a linked list to track recent tasks
that may no longer be running. It is accessed via the toString method on
the scheduler. This behavior is not guaranteed, but it is accessible as
such currently.
Although toString is located in the scheduler, its contract does not
guarantee an accurate or up to date call when accessed from a second
thread.
When 1.3.1 was released, a try-catch block was removed from the tick
loop that called the method in NMS to handle commands. This restores a
try-catch to prevent the console from crashing the server.
Minecraft resets abilities based on what it knows client side: when
someone dies and is in "survival," by default they should be in
"survival." However, we allow modification of the PlayerAbilities, so we
send this update out to the client.
Oh and, the format of the commit is like this to see if it looks any good. :)
Previously, the timeout would erroneously get converted to milliseconds
twice. The second conversion was removed.
Spurious wakeups were not handled properly, and would instead throw a
TimeoutException even if the waited time had not been reached.
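The standard guard, sketched with assumed done and value fields:
    synchronized Object get(long timeoutMillis) throws InterruptedException, TimeoutException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (!done) {
            long remaining = deadline - System.currentTimeMillis();
            if (remaining <= 0) {
                throw new TimeoutException();
            }
            wait(remaining); // may wake spuriously; the loop rechecks the condition
        }
        return value;
    }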
The new scheduler uses a non-blocking methodology. Combining volatile
references to make a linked reference chain, with the atomic reference
handling the tail, tasks are queued without waiting for locks. The main
thread will no longer limit the length of time spent on scheduled tasks,
but no task will run twice in the same tick. Scheduling a new task inside of
a synchronous task will always run the new task during the same tick,
assuming there is no supplied delay > 0.
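A rough sketch of the queueing idea (not the actual CraftScheduler code):
    import java.util.concurrent.atomic.AtomicReference;

    class TaskQueue {
        static class Node {
            final Runnable task;
            volatile Node next; // the volatile link forming the chain
            Node(Runnable task) { this.task = task; }
        }

        // The tail is swapped atomically, so producers never take a lock.
        private final AtomicReference<Node> tail =
                new AtomicReference<Node>(new Node(null));

        void add(Runnable task) {
            Node node = new Node(task);
            tail.getAndSet(node).next = node; // link the old tail to the new node
        }
    }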
Asynchronous tasks are now run using a thread pool. Any thread-local
implementation should now account for threads being reused between
executions.
Race conditions were carefully examined and the order of logic is now very
important. Each task is placed in a secondary collection before removal from
primary collections. Thus, by reading tasks from the collections in the same
order they travel, it retains state-safety. This does make modifications
less responsive in some situations, as the task may be transitioning before
the modifier accesses it. This cost outweighs the requirement to synchronize
on the scheduler; previously any conflict would be first-come-first-serve,
with the main thread backing out arbitrarily.
Many codepaths only end up with one iterator being used at a time and
most of the rest only get up to two being used so using a static pool of
three is wasteful. This also allows us to efficiently handle cases that
exceed 3 iterators in use. Overall this dramatically increases the hit rate
and results in fewer iterators being created.
This ArrayList duplicates part of the functionality of the much more
efficient chunk map so can be removed as the map can be used in the few
places this was needed.
Replace uses of LongHashtable and LongHashset with new implementations.
Remove EntryBase, LongBaseHashtable, LongHashset, and LongHashtable as they
are no longer used.
LongObjectHashMap does not use Entry or EntryBase classes internally for
storage so has much lower object churn and greater performance. LongHashSet
is not as much of a performance win for our use case but for general use is
up to seventeen times faster than the old implementation and is in fact
faster than alternatives from "high performance" java libraries. This is
being added so that if someone tries to use it in the future in a place
unrelated to its current use they don't accidentally end up with something
slower than the Java collections HashSet implementation.
Avoid overhead of using an ArrayList and resizing it. Also allows for reuse
of objects in the pool during the same tick by explicitly releasing them
back to the pool. This allows for much better cache performance as well
as reduced cache footprint.
Remove redundant ArrayList to avoid excessive object creation and CPU
overhead, the entries are added to the list then immediately iterated through
to run so just run them directly.
Swap order of some conditionals to perform the more efficient check first
as if it fails the list lookup will not be executed.
Remove profiling hooks including some rather expensive calls to getSimpleName.
ChunkSection.e() is called once per chunk section loaded and is quite
expensive (about 20% of CPU time for loading the chunk). This changes the
logic to add a fast path when extended block data is not being used and
reorganizes the loops for more optimal array traversal. Overall this saves
about 20-30% CPU time in this method.
- Hardcore requires a newly generated world
- You will be banned if you die in a hardcore world
- You will NOT be banned if you die in a non-vanilla world
- Your "heart container" will not change without logging back in. (Vanilla bug)
They can now spawn any valid entity. What is a "valid" entity? A "valid"
entity is an EntityType with a non-null getName() (for example: PRIMED_TNT,
FALLING_BLOCK).
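Through the Bukkit API this looks like the following, assuming block is a
mob spawner:
    CreatureSpawner spawner = (CreatureSpawner) block.getState();
    spawner.setSpawnedType(EntityType.PRIMED_TNT); // any EntityType with a non-null getName()
    spawner.update();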