Rewrites the Threaded task logic to no longer use 2 queues, and instead
keep a single prioritized queue and do all of a chunk's light tasks in a single batch.
Also fixes a math issue in one place (thankfully it didn't seem to be a commonly hit path, since nothing noticeably broke).
This patch fixes a bug in the WorldChunkManagerTheEnd class where the squared
distance from 0,0 overflows the maximum size of an integer. The overflow leads
to hard chunk borders around 370,000 blocks from 0,0. After this cutoff there
is a few hundred thousand block gap before end land resumes generating at
530,000 blocks from spawn. This is due to the integer flipping back and forth.
The fix for the issue is quite simple: casting the chunk coordinates to longs
allows the distance calculation to avoid overflow and work as intended.
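Below is a minimal sketch of this class of bug and its fix, not the actual WorldChunkManagerTheEnd code: squaring int coordinates silently wraps, so the operands are widened to long first.

```java
public final class EndDistanceExample {
    // Broken: the multiplication happens in int arithmetic and silently wraps,
    // which is what produces the hard borders / gaps described above.
    static int distanceSquaredInt(int x, int z) {
        return x * x + z * z;
    }

    // Fixed: cast to long before multiplying so the squared distance cannot overflow.
    static long distanceSquaredLong(int x, int z) {
        return (long) x * (long) x + (long) z * (long) z;
    }

    public static void main(String[] args) {
        int x = 92_500, z = 0; // far from 0,0 (the exact wrap point depends on the coordinate scale vanilla uses)
        System.out.println(distanceSquaredInt(x, z));  // wraps negative: 8,556,250,000 doesn't fit in an int
        System.out.println(distanceSquaredLong(x, z)); // 8556250000, as intended
    }
}
```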
We had 2 issues.
1) The Log4J2 shutdown hook seemed to be causing issues, as it shut down the logger while we still needed it
2) ServerShutdownThread needs to stay alive until the server is shut down to keep the JVM open.
It appears SIGINT is handled differently than SIGTERM, as SIGINT worked correctly.
But this will make both methods work.
Fix a bug where Mojang applies a -90 modifier to yaw, resulting in us calculating
chunks to the player's left rather than in front of them.
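A small illustrative sketch of the yaw convention involved (not the actual chunk priority code): Minecraft's yaw 0 faces +Z and yaw 90 faces -X, so the forward vector is (-sin yaw, cos yaw), and an extra -90 offset rotates it to the player's side.

```java
public final class ForwardVectorExample {
    static double[] forwardXZ(float yawDegrees) {
        double yaw = Math.toRadians(yawDegrees);
        // Standard Minecraft convention: x = -sin(yaw), z = cos(yaw)
        return new double[] { -Math.sin(yaw), Math.cos(yaw) };
    }

    public static void main(String[] args) {
        double[] facingSouth = forwardXZ(0.0f);  // ~(0, 1): chunks "in front" are toward +Z
        double[] facingWest = forwardXZ(90.0f);  // ~(-1, 0): chunks "in front" are toward -X
        System.out.println(facingSouth[0] + "," + facingSouth[1] + "  " + facingWest[0] + "," + facingWest[1]);
    }
}
```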
Drastically improve the Frustum Prioritization function to reduce lag from its
calculations (found it was being spammed really heavily on world add/teleport).
Also improved the logic behind choosing chunks to prioritize.
Add priority tickets to a radius of 3 on any login, world change or teleport.
This should help improve world load / chunk sending upon a player changing
locations by loading those chunks faster.
Improved the Player Ticket Delayer to be a little bit smarter about delays to
let closer chunks load a bit faster and only delay the farther out ones more.
This update will provide significant improvements to chunk prioritization and
reduce the CPU cost of doing these calculations.
Fixes #3530
There is some vanilla-level bug where this tracking state appears
to get messed up and the player doesn't exist in the chunk it's trying to untrack.
We returned early to prevent crashing, but I suspect if there was a level being
tracked for the chunk, it got leaked due to the early return.
So going to ensure we clean up the level tracker when this state occurs.
This may help with any leaked chunk issues.
Now supports async chunk access, even though doing that is bad
and shouldn't be done anyways since we force you back to main; it'll
now just delay the ticket add to main the same way.
Now only add the ticket if the plugin CAUSED the chunk load, so no longer
adds ticket if the chunk was already loaded.
Additionally, cap chunk ticket limits to 1 second (Effectively ignoring chunk-gc config
unless the config is lower than 20 ticks)
Fixes #3533
Obfuscate multiple chunks at a time over the server thread pool.
Will speed up chunk processing when anti xray is enabled.
Co-authored-by: Aikar <aikar@aikar.co>
As in previous versions, plugins loading chunks kept them loaded until
they were garbage collected, to avoid constant spamming of chunk loads.
This adds tickets to a few more places so that they can be unloaded.
Additionally, this drops their ticket level to BORDER so they won't be ticking;
they will just sit inactive instead.
Using .loadChunk to keep a chunk ticking was a horrible idea for upstream
when we have TWO methods that are able to do that already in the API.
Not adding it to .getType() though to keep behavior consistent with vanilla.
In previous MC versions, we had a rather simple internal scheduler
for delayed tasks that would just keep pushing the task back until the desired
tick was reached.
The method it called to schedule the task changed behavior in 1.14, and now
this scheduler was not working anywhere near how it was supposed to.
This was causing long-delayed tasks to eat up CPU (in oversleep, for example).
Rewrite this to just use the CraftScheduler for scheduling delayed tasks.
Once this was fixed, it became quite clear that the code that delayed ticket
additions for chunks based on distance was not right, as it had been
tested against the previous broken logic.
So the ticket delay process has been vastly revamped to be even smarter.
Chunks behind the player can load slower than the chunks in front of the player.
We also can delay ticket adding until one of its neighbors has loaded, as
this lets us get a smoother spiral out for the chunks (minus frustum intent).
Additionally, on frustum: the previous commit inadvertently broke frustum
prioritization while trying to fix an issue whose real fix lay elsewhere, so restore
chunk priority so it works again.
When players are moving in the world, doing things such as building or exploring,
they will commonly go back and forth in a small area. This causes a ton of chunk load
and unload activity on the edge chunks of their view distance.
A simple back-and-forth movement of 6 blocks could cause a chunk to thrash
through load and unload cycles over and over again.
This is very wasteful. This system introduces a delay of inactivity on a chunk
before it actually unloads, which will be handled by the ticket expiry process.
This allows servers with smaller worlds who do less long distance exploring to stop
wasting cpu cycles on saving/unloading/reloading chunks repeatedly.
Upon gaining further knowledge of the system, it is now known that region files
are closing properly, and this didn't help native memory use anyways.
This patch also caused issues compiling on a newer JDK while still being able to
release the jar to Java 8 users.
Priority tickets being added at level 33 were hurting sync EMPTY and lesser requests;
this was likely the source of the recent treasure map issues.
It then further hurt nether portal travel too. Lots of oddness around.
This also avoids scheduling a level change on ticket removal when the level
is unchanged, as well as ditching CB's horrible change of not letting
you access an unloading chunk, which should be valid in order to cancel the unload.
I'm going make a class, and in that class i'm going to
make a method. And in that method, I'm going to make a local class.
And then in that local class, I'm going to make another inner class.
I heard you like complex class trees.
Massive update to light to improve performance and chunk loading/generation.
1) Massive bit packing/unpacking optimizations and inlining.
A lot of the performance cost has to do with constant packing and unpacking of bits.
We now inline most bit operations, and re-use base x/y/z bits in many places.
This helps the CPU just do all the math at once instead
of having to jump in and out of function calls.
This much logic is also likely over the JVM's inlining limit for the JIT.
(A sketch of this style of inlined bit math follows the list below.)
2) Applied a few of JellySquid's Phosphor mod optimizations such as
- ensuring we don't notify neighbor chunks when the neighbor chunk doesn't need to be notified
- reducing hasLight checks when initializing light, and probably some more; they are tagged JellySquid where Phosphor influence was used.
3) Optimize hot path accesses to getting the updating chunk to have less branching
4) Optimize getBlock accesses to have less branching and less unpacking
5) Have a separate urgent bucket for chunk light tasks. These tasks will always cut in line over non-blocking light tasks.
6) Retain chunk priority while light tasks are enqueued. So if a task comes in at high priority but the queue is full
of tasks already at a lower priority, previously the task was simply added to the end; now it can cut in line to the front.
This applies to both urgent and non-urgent tasks.
7) Buffer non-urgent tasks even if queueUpdate is called multiple times to improve efficiency.
8) Fix an NPE risk that could crash the server when getting nibble data
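Below is a sketch of the inlined bit-math style referenced in point 1; the 26/26/12-bit x/z/y split mirrors how block positions are commonly packed into a long, but it is illustrative rather than the actual light engine layout.

```java
public final class PackedPosExample {
    private static final int X_BITS = 26, Z_BITS = 26, Y_BITS = 12;
    private static final long X_MASK = (1L << X_BITS) - 1;
    private static final long Z_MASK = (1L << Z_BITS) - 1;
    private static final long Y_MASK = (1L << Y_BITS) - 1;

    // Pack once; callers can then derive neighbor keys with plain arithmetic instead of
    // unpacking to x/y/z, adjusting, and re-packing through several method calls.
    static long pack(int x, int y, int z) {
        return ((x & X_MASK) << (Y_BITS + Z_BITS)) | ((z & Z_MASK) << Y_BITS) | (y & Y_MASK);
    }

    static int unpackX(long packed) { return (int) (packed << (64 - X_BITS - Y_BITS - Z_BITS) >> (64 - X_BITS)); }
    static int unpackZ(long packed) { return (int) (packed << (64 - Y_BITS - Z_BITS) >> (64 - Z_BITS)); }
    static int unpackY(long packed) { return (int) (packed << (64 - Y_BITS) >> (64 - Y_BITS)); }

    public static void main(String[] args) {
        long key = pack(-30_000_000, 64, 12_345);
        System.out.println(unpackX(key) + " " + unpackY(key) + " " + unpackZ(key));
    }
}
```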
Fixes #3489 Fixes #3363
Previously maps would load all chunks in a certain radius depending on
their scale when trying to update their content. This would result in
main thread chunk loads when they weren't really necessary, especially
on low view distances or "slow" async chunk loads after teleports or
other prioritisation.
This changes it to only try to render already loaded chunks based on
the assumption that the chunks around the player will get loaded
eventually anyways.
In rare cases, this class could potentially be loaded from
the chunk threads, causing it to initialize asynchronously and error.
This would then break the server and chunk saving.
So ensure it's loaded at server start to avoid this.
Still needs front-end changes before you can see it, though.
1) Adds Game Rules per world
2) Adds View distances per world
3) Removes extra garbage on lambda task names
4) Adds more memory information such as native load
5) Adds load average for non crap operating systems.
6) Fixes online mode showing false when privacy=true
7) Adds Data packs loaded
Switch to a standard fixed size ThreadPoolExecutor as we don't use the
advanced capabilities of a ForkJoinPool.
ForkJoinPool does not allow single threads, and we'd really rather not use
2 different executor types based on core count.
Also, change thread priorities so that the main thread is prioritized by
the OS at a higher priority than the other threads. It may not help too much,
but it at least gives the OS the information to know main is more important.
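A minimal sketch of that setup (fixed-size pool plus adjusted worker priority); the thread names and priority values here are illustrative, not Paper's actual ones.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public final class WorkerPoolExample {
    public static ExecutorService createWorkerPool(int threads) {
        AtomicInteger id = new AtomicInteger();
        return Executors.newFixedThreadPool(threads, runnable -> {
            Thread thread = new Thread(runnable, "Worker-" + id.incrementAndGet());
            thread.setDaemon(true);
            // Run workers slightly below NORM_PRIORITY so the OS scheduler favors the
            // main server thread (which would be bumped above NORM_PRIORITY separately).
            thread.setPriority(Thread.NORM_PRIORITY - 1);
            return thread;
        });
    }
}
```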
Locks the dimension manager to the first world it's used with.
WE is creating a temp world and the world ref on that manager
is getting changed to the temp world.
This would have also caused a memory leak of that temp world.
Upstream has released updates that appear to apply and compile correctly.
This update has not been tested by PaperMC and, as with ANY update, please do your own testing.
My recent work on serialization is now in CraftBukkit, so I was able to drop the patch, and Paper
is now consistent with upstream.
Bukkit Changes:
e2699636 Move API notes to more obvious location
CraftBukkit Changes:
1b2830a3 SPIGOT-4441: Fix serializing Components to and from Legacy
This should now complete legacy serialization to avoid ever
changing the output content.
This removes the concept of "Default Color" from the method as
that entire concept was flawed and broke the intent of chat components.
Going to actually PR this patch to Spigot soon.
This now puts us back at a point where any data saved before Spigot
broke things will still save back the exact same way as before,
but new component -> legacy conversion will now be fixed to not insert undesirable
default colors (such as black) into the legacy string, and instead use
the proper reset code.
This means you can now safely get the text from a book and
put it in chat or an entity display name without worrying about black
color codes or other undesired color codes leaking into the new
context where that color doesn't make sense.
This brings chat component serialization to 100% accuracy, so
that any text input in the legacy format, converted to components and
then back to legacy, will produce identical results.
If the user explicitly sets a color as a prefix to a string, it is retained,
even if that color matches the default.
This also helps improve dealing with the empty string wrappers Bukkit creates.
A unit test has been added to verify this behavior.
This patch fixes the serialization of display names, item lores and
other things which use strings with color codes. The old implementation
deleted the color codes at the beginning of the resulting string if they
matched the default color passed to the conversion function. This
resulted in items having a black display name losing the black color
code in the beginning of the text when the item was serialized (e.g.
saving an ItemStack in a Yaml config).
Spigot has now made the issue worse and expanded the scope to more places.
1) Improve frustum to look more at the near chunks and frontal chunks only, instead of 1 large single lookup.
2) Delay adding 33 tickets based on view distance and lower their task priority. This will roll the spiral out more slowly.
3) Chunks behind the player have an additional delay on loading, favoring chunks in front of the player.
This has the benefit that, when traveling quickly, some of the chunks will be cancelled / not loaded.
This should reduce pressure on chunk loading, as well as reduce loading/unloading unnecessary chunks while moving.
When a chunk that has already been generated is loaded from disk,
the server has to promote the chunk through the system to reach
its current desired status level.
This results in every single status transition going from the main thread
to the world gen threads, only to discover it has no work it actually
needs to do.... and then it returns back to main.
This back and forth costs a lot of time and can really delay chunk loads
when the server is under high TPS, due to there being a lot of time in
between chunk load times, as well as hogging up the chunk threads from doing
actual generation and light work.
Additionally, the whole task system uses a lot of CPU on the server threads anyways.
So by optimizing status transitions for statuses that are already complete,
we can run them to the desired level while on the main thread (where it has
to happen anyways) instead of ever jumping to a world gen thread.
This reduces chunk loading down to the following
scenario / path:
1) MAIN: Chunk Requested, Load Request sent to ChunkTaskManager / IO Queue
2) IO: Once position in queue comes, submit read IO data and schedule to chunk task thread
3) CHUNK: Once IO is loaded and position in queue comes, deserialize the chunk data, process conversions, submit to main queue
4) MAIN: next Chunk Task process (Mid Tick or End Of Tick), load chunk data into world (POI, main thread tasks)
5) MAIN: process status transitions all the way to LIGHT, light schedules Threaded task
6) SERVER: Light tasks register light enablement for chunk and any lighting needing to be done
7) MAIN: Task returns to main, finish processing to FULL/TICKING status
Previously it would have hopped to SERVER around 12+ extra times.
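A heavily simplified sketch of the idea, with placeholder types rather than the real ChunkStatus/PlayerChunkMap classes: promotions whose work is already persisted complete inline on main, while real generation still goes to the worker pool.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;

final class StatusPromotionExample {
    enum Status { EMPTY, FEATURES, LIGHT, FULL }

    interface Chunk {
        Status persistedStatus();      // highest status already saved on disk
        void advanceTo(Status status); // performs the transition work (a no-op when already persisted)
    }

    static CompletableFuture<Void> promote(Chunk chunk, Status target, Executor workerPool) {
        if (chunk.persistedStatus().ordinal() >= target.ordinal()) {
            // Already generated to this level: finish the bookkeeping right here on main
            // instead of scheduling a worker task that would only discover there is nothing to do.
            chunk.advanceTo(target);
            return CompletableFuture.completedFuture(null);
        }
        // Real generation/light work still goes to the worker pool as before.
        return CompletableFuture.runAsync(() -> chunk.advanceTo(target), workerPool);
    }
}
```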
Mojang has flaws in their logic about chunks being concurrently
written to. So we constantly see crashes around multiple threads writing.
Additionally, Java has optimized synchronization so well that it's
in many cases faster than trying to manage read/write locks for low
contention situations.
And this is an extremely low contention situation.
Fixes #3293 Fixes #2493
I'm hoping the other fix in 324 for the level map getting corrupted
fixes the real issue and this isn't needed anymore, but I suspect it is;
will wait until more study can be done though.
Fixes #3469
We must check the level tracker as ticket levels add "virtual"
tickets to neighbors.
Also added neighbor tracking during generation to be extra safe.
Fixes #3465 Fixes #3451 Fixes #3459
Mojang implemented a cache like chunks have, but this cache
is accessed by multiple threads and is totally not safe.
So just remove it
Fixes #3466
Also missed a pooled nibble release, so slid that in there too.
This change reimplements the entire BehaviorFindPosition method to
get rid of all of the streams and implement the logic in a more sane way.
We keep vanilla behavior 100% the same with this change, just written more
optimally, as we can abort iterating POI's as soon as we find a match....
One slight change is that Minecraft adds a random delay before a POI is
attempted again. I've increased the amount of that delay based on the distance
to said POI, so farther POI's will not be attempted as often.
Additionally, we spiral out, so we favor local POI's before we ever favor farther POI's.
We also try to pathfind 1 POI at a time instead of collecting multiple POI's then tossing them
all to the pathfinder, so that once we get a match we can return before even looking at other
POI's.
This benefits us in that ideally, a villager will constantly find the near POI's and
not even try to pathfind to the farther POI. Trying to pathfind to distant POI's is
what causes significant lag.
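A simplified sketch of that stream-free, spiral-out search with early abort; the lookup interface is a placeholder, not the actual BehaviorFindPosition code.

```java
final class SpiralPoiSearchExample {
    interface PoiLookup {
        /** Returns true if a free POI in this chunk was claimed by the searcher. */
        boolean tryClaimFreePoi(int chunkX, int chunkZ);
    }

    /** Visits chunks ring by ring outward from the center and stops at the first match. */
    static boolean findNearestPoi(PoiLookup lookup, int centerX, int centerZ, int maxRadius) {
        for (int radius = 0; radius <= maxRadius; radius++) {
            for (int dx = -radius; dx <= radius; dx++) {
                for (int dz = -radius; dz <= radius; dz++) {
                    // Only visit the outer ring of this radius; inner positions were already checked.
                    if (Math.max(Math.abs(dx), Math.abs(dz)) != radius) continue;
                    if (lookup.tryClaimFreePoi(centerX + dx, centerZ + dz)) {
                        return true; // abort immediately instead of collecting every candidate
                    }
                }
            }
        }
        return false;
    }
}
```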
Another improvement here is to stop spamming the POI manager with empty nullables.
Vanilla used them to represent if they needed to load POI data off disk or not.
Well, we load POI data async on chunk load, so we have it, and we surely do not ever
want to load POI data sync either for unloaded chunks!
So this massively reduces object count in the POI hashmaps, resulting in fewer hash collisions
and also less memory use.
Additionally, unemployed villagers were using significant time due to a major inefficiency in
the code rebuilding data that is static on every single invocation for every POI type...
So we cache that and only rebuild it if professions change, which should be never unless
a plugin manipulates and adds custom professions, which it will handle by rebuilding.
Some plugins are doing really, really bad things to worlds, breaking the
ability to send sounds to some users.
So we create another reference to the player chunk map that plugins won't be breaking, and
print a stack trace at world creation if we ever get an unexpected world state, to identify
who is doing it!
If we encounter this illegal state, we fall back to the old method of sending sounds, so
sending sounds will still work, just less efficiently.
Spigot made structure start not load chunks, but forgot to null check
the result...
This likely never blew up before due to the chunk leak issue, but now
that leaky chunks are cleaned up, it was identified.
While the last attempt was mostly there, it still had some slight risk of unloading
before it was fully finished.
So just going to bump the delay to 3 minutes to be safe. Better than
forever at least.
Was really hoping we could unload them as soon as they were done to avoid
any memory prematurely promoting to the old generation, but guess we can't.
A chunk was loaded but not yet finished being used, and was unloaded too early.
This caused it to be reloaded again or caused crashes.
Now, when the chunk pops out of the unload queue, also check that it
doesn't now have a ticket either.
Due to some complexity in Mojang's complicated chain of juggling
whether or not a chunk should be unloaded when the last ticket is
removed, many chunks are remaining around in the cache.
These chunks are never being targeted for unload because they are
vastly out of view distance range and have no reason to be looked at.
This is a huge issue for performance because we have to iterate these
chunks EVERY TICK... This is what's been leading to high SELF time in
Ticking Chunks timings/profiler results.
We will now detect these chunks in that iteration, and automatically
add them to the unload queue when a chunk is found without any tickets.
Spigot inserted their Slack Activity Accountant in the wrong location,
resulting in a chunk being removed from the unload queue, inserted into
the unload map, but never calling the function to finish the removal....
This caused the chunk to become stuck in the unload map if ever hit, because
the unload map was meant to be a TEMPORARY location while it was saving.
Fix this by aborting iteration AFTER the current chunk has finished processing.
Also, improve how aggressive we are at unloading chunks, targeting 10% per tick instead.
These saves are asynchronous so there should be less of a hit here.
This is for 2 reasons:
1) Ensuring our Log4j is mostly loaded at OUR version.
I've seen stack traces with line numbers that do not match our version. This means that some
plugin has shaded in Log4j and their loaded version is mixing with ours....
So by at least trying to load a bunch of Log4j classes before we load plugins, we can be
more sure mixed versions are not loading.
2) If the jar file is replaced while the server is running, class not found errors galore.
This will preload a bunch of classes commonly seen to error during shutdown due to this.
The goal here is to help let the server shut down as gracefully as possible. Some plugins will
still blow up here if they access a class that hadn't been loaded yet, but the goal is to at least
stop freezing the shutdown process as it does with JLine and Log4j errors requiring an external kill.
Ideally you should not replace jars while the server is running, but it is something that happens in
development for testing.
Updated the test server to do a copy though, to avoid this happening in Paper development.
Accidentally used the snapshot, as well as needed to exclude the
existing old 25 build of libnetty so that we load the newer build of
epoll instead.
Thanks to 56738 for the catch.
2 people had issues where some plugin is doing some really insane NMS hackery
that created invalid worlds, which caused some errors...
Really don't understand what in the world they did, but putting in a dumb guard that
shouldn't even be necessary to just not send the sound effect rather than erroring.
While this method has async in its name, it's not actually meant
to be called asynchronously.... It just means IT will load the chunk
asynchronously without blocking main.
So fix this so that if a plugin calls it async, it forces the request back to the main thread.
Minecraft's Netty version was severely out of date. There have been
numerous security fixes, bug fixes, and likely performance fixes
since the version Minecraft uses (4.1.25).
This fixes some known issues with "Closed Channel" spam.
Fixes #3388
Instead of using the entire world or player list, use the distance
maps to only iterate players who are even seeing the chunk the packet
is originating from.
This will drastically cut down on packet sending cost for worlds with
lots of players in them.
Closes #3437
Any full status chunk that was requested for any status less than full
would hold onto its entire NBT tree and every variable in that function.
This was due to the use of a lambda that persists on the Chunk object
until that chunk reaches FULL status.
With the introduction of no-tick, we greatly increased the number of non-full
chunks, so this was really starting to hurt.
We further improve it by making a copy of the NBT tag with only the memory
it needs, so that we don't have to hold a reference to the entire compound.
This should help greatly (as long as this change works...) in
understanding an exception when it doesn't get truncated with
"... and 14 more" at a vital point of the stack trace.
Fixed issues where urgent and prioritized chunks didn't actually
always get their priority boosted correctly....
Properly deprioritize non-ticking chunks.
Limit recursion on watchdog prints to stop flooding as much.
Remove neighbor priorities from watchdog prints to reduce information.
Reduce synchronization duration so that the watchdog won't block main should main actually wake up.
Probably fixed a deadlock risk in watchdog printing also that was leading to crashes.
Fixed chunk holder enqueues not being processed correctly.
Added async catchers in some locations that should not be run async.
Fixed an upstream bug where VITAL callbacks that must run on main actually could
sometimes run on the server thread pool, causing a lot of these nasty bugs we've seen lately!
This build will provide massive improvements to stability as well as even faster
sync chunk load/gens now that priority is correctly set.
Fixes #3435
The nibble pooling for NBT Tags was 'semi' leaked from loaded chunks,
as we store the NBT Tag of Tile Entities in a Chunk, but don't process
them and remove them until the chunk reaches Entity Ticking status....
This caused some phantom references to persist, causing high memory use
of these chunks.
So I just got rid of pooling from NBT deserialization and we'll have to
take the hit on memory allocations there, because there are too many cascading concerns
with anyone using NBT Tag Byte Arrays.
Fixes #3431
I believe this brings us back to stable. A lot of complexity was
learned about juggling priorities.
We were essentially promoting more chunks to urgent than really
needed to be urgent.
So this commit adds a lot more logic to juggle neighbor priorities
and demote their priority once they meet the requirements needed of
them.
This greatly improves the performance of "urgent" chunks.
Fixes #3410 Fixes #3426 Fixes #3425 Fixes #3416
CB only protected from sizes > 64, but there's no reason an entity's BB should ever
be more than 2x its width or 1x its height, as the BB is supposed to represent
the entity size.
The BB is divided by 2 to calculate position.
Blow up if a plugin tries to mutate visibleChunks directly and prevent them
from doing so.
Also provide a safe get call if any plugins directly call get on it so
that it uses the special logic to check pending.
Also restores ABI for the visibleChunks field back to what it was too.
Additionally, remove the stack trace from Timings Stack Corruption for any
error thrown on Minecraft Timings, and tell them to get the error ABOVE this
instead, so people stop giving us useless error reports.
Also fixes a memory leak when the source map downsizes but the dest map didn't,
which resulted in lingering references to old chunk holders.
Fixes #3414
A synchronized ArrayDeque ends up still being way faster.
Kinda shocked how much that strategy was using; it wasn't really
that complicated... but oh well, this is even simpler, and we're not
seeing blocked threads show up at all in profiling because
the lock is held for such a short amount of time,
also because most uses are on either the server thread pool or the chunk load pool.
Also optimize the pooling of nibbles to not register Cleaners
for Light Engine directed usages, as we know we are properly
controlling cleanup there, so we don't need to rely on GC.
This will return them to the pool manually, saving a lot of Cleaners.
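A minimal sketch of the pooling strategy described here (a plain synchronized ArrayDeque with manual release); the names are illustrative, not the actual nibble pool implementation.

```java
import java.util.ArrayDeque;

final class ByteArrayPoolExample {
    private final ArrayDeque<byte[]> pool = new ArrayDeque<>();
    private final int maxSize;
    private final int arrayLength;

    ByteArrayPoolExample(int maxSize, int arrayLength) {
        this.maxSize = maxSize;
        this.arrayLength = arrayLength;
    }

    byte[] acquire() {
        byte[] buffer;
        synchronized (pool) { // lock is held only for a single poll, so contention stays low
            buffer = pool.pollFirst();
        }
        return buffer != null ? buffer : new byte[arrayLength];
    }

    // Callers that control the object's lifecycle (e.g. the light engine) return buffers
    // manually instead of relying on a GC-driven Cleaner.
    void release(byte[] buffer) {
        synchronized (pool) {
            if (pool.size() < maxSize) {
                pool.addFirst(buffer);
            }
        }
    }
}
```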
Closes #3417
Fixed a few bugs, and made numerous improvements.
Fixed issue where a sync chunk load could have its ticket removed and the
priority ticket could expire...
Still not perfect there but better than before.
Also fixed a few other misc issues, such as watchdog CPU usage, and the chunk queue update
having a risk of double enqueue due to it no longer being a set.
Added much more information about chunk state to watchdog prints.
I see some more room for improvement even, but this is much better than before.
Fixes #3407 Fixes #3411 Fixes #3395 Fixes #3389
Dynmap accessed the raw bytes because it utilized NBT locally, but the
NBTTagCompound was garbage collected while the bytes were still being used.
This will return getBytes() back to being safe, and add a new PoolSafe method
that will prevent the additional allocations for general chunk loading.
Also fixed applyPatches for people with paths in their working directory
if they have mcdev sources built.
Mark chunks that are blocking main thread for world generation as urgent
Implements a general priority system so that chunks that are sorted in
the generator queues can prioritize certain chunks over another.
Urgent chunks will jump to the front of the line, ensuring that a
sync chunk load on an ungenerated chunk does not lag the server for
a long period of time if the servers generator queues are filled with
lots of chunks already.
This massively reduces the lag spikes from sync chunk gens.
Then we further prioritize loading order so nearby chunks have higher
priority than distant chunks, reducing the pressure a high no tick
view distance holds on you.
Chunks in front of the player have higher priority, to help
fast-traveling players keep up with their movement.
This commit also improves single-core CPU scenarios, in that we will
now automatically disable Async Chunks as well as Minecraft's thread
pool.
It is never recommended to use async chunks on a single CPU as context
switching will be slower than just running it all on main.
This also bumps the number of server worker threads by default too.
Mojang does not utilize the workers in an efficient manner, resulting
in them using barely any sustained CPU.
So give it more workers so more chunks can be processed concurrently
This change also improves urgent chunk loading, so players flying into
unloaded chunks will hurt a little bit less (but still hurt)
Ping #3395 #3363 (Not marking as closed, we need to make prevent moving work)
The expected version should be equal to or newer than the one stored.
Although Aikar claims he did this on accident (and NOT my ligatures!), I
claim this is all a big conspiracy by followers of the Taco cult.
When crossing certain chunk boundaries, the client needlessly
calculates light maps for chunk neighbours. In some specific map
configurations, these calculations cause a 500ms+ freeze on the Client.
This patch basically serves as a workaround by sending light maps
to the client, so that it doesn't attempt to calculate them.
This mitigates the frametime impact to a minimum (but it's still there).
Massively reduces memory allocation of 2048-byte buffers by using
an object pool for these.
Uses lots of advanced new capabilities of the Paper codebase :)
Targets 3072 * 8 buffers per 1GB of heap memory, up to a max consideration
of 6GB of heap (anything over 6GB won't grow the nibble pool).
You can control the 3072 number by setting -DPaper.nibbleBucketSize=2048.
Remember this number is * 8, then * heap memory in GB.
That is 98,304 objects for 4GB of memory, at roughly 2064 bytes each, meaning about 194MB.
You may also control the max number of pooled objects directly, instead of any
dynamic calculation, using -DPaper.maxNibblePoolSize=1024000.
While this will use a tad more old generation, the allocation rate will drop
significantly, causing fewer young generation GCs.
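A small sketch of the sizing arithmetic above; the system property names match the ones mentioned in the text, but the calculation is illustrative rather than the exact implementation.

```java
final class NibblePoolSizingExample {
    static int computeMaxPoolSize() {
        long explicit = Long.getLong("Paper.maxNibblePoolSize", -1);
        if (explicit > 0) {
            return (int) Math.min(explicit, Integer.MAX_VALUE);
        }
        int bucketSize = Integer.getInteger("Paper.nibbleBucketSize", 3072);
        long heapGb = Math.min(6, Runtime.getRuntime().maxMemory() / (1024L * 1024L * 1024L));
        // e.g. a 4GB heap at the default bucket size: 3072 * 8 * 4 = 98,304 buffers,
        // at ~2064 bytes each that is roughly 194MB of pooled nibble arrays.
        return (int) (bucketSize * 8L * Math.max(1, heapGb));
    }
}
```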
This commit has gone through extensive testing for over a day, and I'm confident
it no longer has any issues with light corruption.
This commit doesn't do much on its own, but adds a new Java Cleaner API
that lets us hook into Garbage Collector events to reclaim pooled objects and
return them to the pool.
Adds framework for Network Packets to know when a packet has finished dispatching
to get an idea when a packet is done sending to players.
Rewrites PooledObjects impl to properly respect max pool size and remove
almost all risk of contention.
Bumps the Paper Async Task Queue to use 2 threads, and properly shuts it down on shutdown.
Use a proper teleport for teleporting to entities in different
worlds.
Validate that the target entity is valid and deny spectate
requests from frozen players.
Also, make sure the entity is spawned to the client before
sending the camera packet. If the entity isn't spawned clientside
when it receives the camera packet, then the client will not
spectate the target entity.
This fixes exploits that let players destroy bedrock with pistons, explosions
and mushroom/tree generation.
These blocks are designed to not be broken except by creative players/commands.
So protect them from a multitude of methods of destroying them.
A config is provided if you'd rather let players use these exploits, and let
them destroy the world's End Portals and get on top of the nether easily.
Upstream has released updates that appear to apply and compile correctly.
This update has not been tested by PaperMC and, as with ANY update, please do your own testing.
Bukkit Changes:
ffc8e4ca SPIGOT-5716: Clarify documentation of MultipleFacing
CraftBukkit Changes:
d07a78b1 SPIGOT-5716: Clarify documentation of MultipleFacing
46a13860 SPIGOT-5718: Block.BreakBlockNaturally does not reflect tool used
214ffea9 SPIGOT-5727: GameRule doImmediateRespawn cannot be set per-world
Spigot Changes:
2f5d615f SPIGOT-5730: Modernise inventory patch
a2bdb119 SPIGOT-5679: Add config option for end portal activation sound
Closes #3352
I swear the crap that stuff will abuse to make stuff happen is insane.
Hash codes apparently changing behavior of stuff based on their value, so
reverting 2d401d2dfb. Fixes #3346 Fixes #3341
I utilized the IDE to convert streams to non-stream code, so there shouldn't
be any risk of behavior change. Only did minor optimization of the
generated code to remove unnecessary things.
I expect us to just drop this patch on the next major update and re-apply
it with the IDE again, then re-apply the collections optimization.
Optimize the collection by creating a list instead of a set of the key and value.
This lets us get faster foreach iteration, as well as avoiding map lookups on
the values when needed.
Removed streams from hoppers and also fixed a mistake in the logic.
When this patch was ported to 1.14/1.15, a line of code was put in
the wrong place which disabled a significant portion of the improvement.
Replaced usages of streams in isEmpty and itemstack checks
Replaced usage of streams in pulling loop
Replaced usage of streams in Lootable Inventory isEmpty() check
Only check for refilling Lootable Inventory when accessing first slot, not all
All of these in general were pretty significant hits, so this single commit
is going to cause tacos to magically appear in front of you every day.
🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮
Nom Nom Nom
If you hate tacos, you're not allowed to use this improvement.
Also ignore the renames; pulled in a lot of PRs.
Server.reload() had this logic to give time for tasks to shut down;
however, shutdown did not...
Adds a 5 second grace period for any async tasks to finish and warns
if any are still running after that delay just as reload does.
Closes #3337
If anything used setPositionRaw, it left the potential for an AABB
to be left stale at the old location, which could cause massive
AABB boxes if movement ever then got called on the new position.
This guarantees that any time we set the entity's position, we also
update their AABB.
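A minimal sketch of the invariant being enforced (position writes always refresh the AABB); the Entity/AABB types here are placeholders, not the actual NMS classes.

```java
final class EntityPositionExample {
    static final class AABB {
        final double minX, minY, minZ, maxX, maxY, maxZ;
        AABB(double minX, double minY, double minZ, double maxX, double maxY, double maxZ) {
            this.minX = minX; this.minY = minY; this.minZ = minZ;
            this.maxX = maxX; this.maxY = maxY; this.maxZ = maxZ;
        }
    }

    double x, y, z;
    final double width = 0.6, height = 1.8;
    AABB boundingBox = computeBoundingBox();

    void setPosition(double newX, double newY, double newZ) {
        this.x = newX; this.y = newY; this.z = newZ;
        // Keep the AABB in sync so later movement/collision code never sees a stale box.
        this.boundingBox = computeBoundingBox();
    }

    private AABB computeBoundingBox() {
        double half = width / 2;
        return new AABB(x - half, y, z - half, x + half, y + height, z + half);
    }
}
```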
We store a reference to the chunk the entity is currently in, so use it
to more accurately unregister it in chunkCheck.
Should maybe fix some entity loss issues.
Obscure detail: if you teleport right on a chunk line, it
adds +1 to your collision check and will check the unloaded neighbor,
but the call to load the chunk then returned null if it was pending unload, such
as the load we did in Player List.
However, we want gen=true for players here anyways, so use getType.
This also cleans up the implementation of Async Chunks to get rid of most
Consumer callbacks and instead return futures.
This lets us propagate errors correctly up the future chain
(barring one isn't lost even deeper in the chain...).
So exceptions can now bubble up to plugins using getChunkAtAsync.
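A small usage sketch of the future-returning API, assuming Paper's World#getChunkAtAsync(int, int) overload that returns a CompletableFuture<Chunk>:

```java
import org.bukkit.Chunk;
import org.bukkit.World;

final class AsyncChunkUsageExample {
    static void loadAndUse(World world, int chunkX, int chunkZ) {
        world.getChunkAtAsync(chunkX, chunkZ).whenComplete((chunk, throwable) -> {
            if (throwable != null) {
                // Errors now propagate up the future chain instead of being swallowed.
                throwable.printStackTrace();
                return;
            }
            // Completion is intended to be delivered on the main thread, so touching
            // world state here should be safe.
            System.out.println("Loaded chunk at " + chunk.getX() + ", " + chunk.getZ());
        });
    }
}
```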
While there is more to do further down the collision system, remove some of the wrapping
Spliterator stuff, as even this wrapper stream has shown up in profiling.
With other collision optimizations, we might even avoid the inner streams too.
This patch replaces the vanilla collision code for both block and entity collisions with faster implementations by JellySquid, used originally in her Lithium mod.
Optimizes full-block voxel collisions and removes streams from entity collisions.
Original code by JellySquid, licensed under GNU Lesser General Public License v3.0.
You can find the original code at https://github.com/jellysquid3/lithium-fabric/tree/1.15.x/fabric (Yarn mappings)
Ported by
Co-authored-by: Zoutelande <54509836+Zoutelande@users.noreply.github.com>
Touched up by Aikar to keep previous paper optimizations
The collision code takes an AABB and generates a cuboid of checks rather
than a cylinder, so at high velocity this can generate a lot of chunk checks.
Treat an unloaded chunk as a collision for entities, and also for players if
the "prevent moving into unloaded chunks" setting is enabled.
If that setting is not enabled, collisions will be ignored for players, since
movement will load only the chunk the player enters anyways and avoids loading
massive amounts of surrounding chunks due to large AABB lookups.
Fixes #3321
This was using SIGNIFICANT amounts of memory allocating many
long[]'s for BitSets for every ProtoChunk in the cache that had
been unloaded and reloaded.
This will result in a nice memory reduction.
Actually showed up in profiling as decent time spent here...
Noticed y/z was missing its final that it used to have, when x had it. Somehow it
must have gotten messed up on some update. Though people suggest this shouldn't have
mattered anyways, let's put it back for safety.
Added a cache of the hashcode, as well as optimized the hash code using larger primes.
Also stored the long value of the x/y/z so that for equals we can compare a single long,
as well as have that long value cached for .asLong().
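A simplified sketch of that caching; the field layout and mixing constant are illustrative, not the actual BlockPosition patch.

```java
final class CachedBlockPosExample {
    private final int x, y, z;
    private final long asLong;  // packed once so equals() compares a single long
    private final int hashCode; // hash computed once and cached

    CachedBlockPosExample(int x, int y, int z) {
        this.x = x;
        this.y = y;
        this.z = z;
        this.asLong = ((long) (x & 0x3FFFFFF) << 38) | ((long) (z & 0x3FFFFFF) << 12) | (y & 0xFFF);
        this.hashCode = Long.hashCode(asLong * 0x9E3779B97F4A7C15L); // illustrative mixing constant
    }

    long asLong() { return asLong; }

    @Override public int hashCode() { return hashCode; }

    @Override public boolean equals(Object other) {
        return other instanceof CachedBlockPosExample
            && this.asLong == ((CachedBlockPosExample) other).asLong;
    }
}
```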
This lets you run /paper fixlight <chunkRadius> (max 5) to automatically
fix all light data in the chunks.
The permission node is the same: "bukkit.command.paper".
Now tracks the full startup time for the "Done" message at the end, as apparently
Vanilla's was done in a place that skipped tracking a lot of code too.
This fixes an issue with ViaVersion.
Will now run those tasks just before we print "Done", so that the startup
time a plugin adds is appropriately accounted for, and it will no longer
trip the watchdog on startup.
Any plugin that tries to bypass this is just going to then trip watchdog
on Spigot too, so don't you dare.
Stop trying to cheat the delay your plugin added to startup time.
This isn't a behavior change, because the first thing the tick does....
was run these tasks....
So it's just moving it slightly, a few lines, to be before a watchdog tick and
to account for it in the "Done" time.
Fixes #3294
When adding/removing to a chunk, we need to also look at
editing the loaded entity list.
Co-authored-by: Spottedleaf <Spottedleaf@users.noreply.github.com>
Upstream has released updates that appear to apply and compile correctly.
This update has not been tested by PaperMC and, as with ANY update, please do your own testing.
Bukkit Changes:
da9ef3c5 #496: Add methods to get/set ItemStacks in EquipmentSlots
3abebc9f #492: Let Tameable extend Animals rather than Entity
941111a0 #495: Expose ItemStack and hand used in PlayerShearEntityEvent
4fe19cae #494: InventoryView - Add missing Brewing FUEL_TIME
CraftBukkit Changes:
933e9094 #664: Add methods to get/set ItemStacks in EquipmentSlots
18722312 #662: Expose ItemStack and hand used in PlayerShearEntityEvent
Removes synchronization from sending packets.
Makes normal packet sends no longer need to be wrapped and queued like they used to be.
Adds more packet queue immunities on top of keep alive, to let the following scenarios go out
without delay:
- Keep Alive
- Chat
- Kick
- All of the packets during the Player Joined World event
Hoping that latter one helps join timeout issues more too for slow connections.
Removes processing of the packet queue off of the main thread
- for the few cases where it is allowed, order is not necessary, nor
should it even be happening concurrently in the first place (handshaking/login/status)
Ensures packets sent asynchronously are dispatched on the main thread.
This helps ensure safety for ProtocolLib, as packet listeners
are commonly accessing world state. This will allow you to schedule
a packet to be sent async, but it'll be dispatched sync for packet
listeners to process.
This should solve some deadlock risks.
This may provide a decent performance improvement, because thread synchronization incurs a cache reset,
so by avoiding ever entering a synchronized block we get to avoid that, and packet sending is a really
hot activity.
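A simplified sketch of the dispatch rule described above; the connection/packet types and the per-tick flush are placeholders for the real networking code.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

final class PacketDispatchExample {
    interface Packet {}
    interface Connection { void writeAndFlush(Packet packet); }

    private final Connection connection;
    private final Queue<Packet> pendingMainThreadSends = new ConcurrentLinkedQueue<>();

    PacketDispatchExample(Connection connection) { this.connection = connection; }

    void sendPacket(Packet packet, boolean onMainThread) {
        if (onMainThread) {
            connection.writeAndFlush(packet); // main thread: send directly, no lock, no queue
        } else {
            // Async callers queue the packet; it is drained on the next main-thread tick so
            // packet listeners (e.g. ProtocolLib) always observe world state from main.
            pendingMainThreadSends.add(packet);
        }
    }

    /** Called once per tick from the main thread. */
    void flushQueuedPackets() {
        Packet packet;
        while ((packet = pendingMainThreadSends.poll()) != null) {
            connection.writeAndFlush(packet);
        }
    }
}
```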
Undo the accidental renaming of a method in 0aad8bf
Aikar wanted to rename DataPalette#getDataBits(T object) to getOrCreateIdFor
in 0aad8bf but he also accidentally renamed
ChunkPacketInfo#getDataBitsIndex(int chunkSectionIndex) to
getOrCreateIdForIndex.
Remove chunk-edge-mode and chunk loading entirely from Anti-Xray
The chunk-edge-mode has been broken for several versions.
Loading chunk neighbors for chunk edge obfuscation isn't needed anymore.
Unlike in previous versions, these are under normal circumstances already loaded
at the time we need them (plugins for example can bypass this).
Use the modified methods and constructors everywhere
Anti-Xray provides support for the default NMS methods and constructors,
which were modified by Anti-Xray to avoid breaking stuff (plugins)
which somehow use these methods.
However, the modified versions of those methods and constructors should be used
where possible.
Increases risk of deadlock if a plugin using ProtocolLib sends a packet
async, a listener then reads world state, and the main thread is then
blocked waiting for the queue to flush.
This will break out of the synchronized block when it jumps to the netty event loop.
See: https://gist.github.com/aikar/e7abb2ba7059149d0a91f7a226e98590
Java 9+ doesn't allow using the exposed cleanup method, but added
a new method on Unsafe to do it.
So we have to detect the Java version and use the appropriate strategy.
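A sketch of the version-dependent cleanup: Java 8 goes through DirectBuffer#cleaner(), while Java 9+ uses Unsafe.invokeCleaner. Reflection is used for both paths so it compiles on either JDK; treat it as illustrative.

```java
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.nio.ByteBuffer;

final class DirectBufferCleanerExample {
    static void free(ByteBuffer buffer) throws Exception {
        if (!buffer.isDirect()) {
            return;
        }
        if (majorJavaVersion() >= 9) {
            // Java 9+: Unsafe.invokeCleaner(ByteBuffer) releases the native memory.
            Class<?> unsafeClass = Class.forName("sun.misc.Unsafe");
            Field theUnsafe = unsafeClass.getDeclaredField("theUnsafe");
            theUnsafe.setAccessible(true);
            Method invokeCleaner = unsafeClass.getMethod("invokeCleaner", ByteBuffer.class);
            invokeCleaner.invoke(theUnsafe.get(null), buffer);
        } else {
            // Java 8: call the buffer's exposed cleaner() and clean() directly.
            Method cleanerMethod = buffer.getClass().getMethod("cleaner");
            cleanerMethod.setAccessible(true);
            Object cleaner = cleanerMethod.invoke(buffer);
            cleaner.getClass().getMethod("clean").invoke(cleaner);
        }
    }

    private static int majorJavaVersion() {
        String version = System.getProperty("java.specification.version");
        return version.startsWith("1.") ? Integer.parseInt(version.substring(2)) : Integer.parseInt(version);
    }
}
```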
See: https://www.evanjones.ca/java-bytebuffer-leak.html
This is potentially a source of lots of native memory usage.
We are clearly seeing native usage upwards of 1-4GB, which doesn't make sense.
Region File usage fixed in the previous patch should have technically only been somewhat
temporary until GC finally gets it some time later, but between all the various
plugins doing IO on various threads, this hidden detail of the JDK could be
keeping long-lived large direct buffers in cache.
Set the system property at server startup, if not set already, to help protect from this.
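A tiny sketch of guarding that property at startup; jdk.nio.maxCachedBufferSize is the property discussed in the linked article, and the 262144 cap is an illustrative value rather than the exact one used.

```java
final class DirectBufferCacheGuardExample {
    static void applyDefaultIfUnset() {
        if (System.getProperty("jdk.nio.maxCachedBufferSize") == null) {
            // Cap the per-thread temporary direct-buffer cache so large one-off IO buffers
            // are not retained forever by pooled threads.
            System.setProperty("jdk.nio.maxCachedBufferSize", "262144");
        }
    }
}
```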
Mojang was semi leaking native memory here by relying on finalizers
to clean up the direct memory.
Finalizers have no guarantee on when they will be run, and since this is
old generation memory, it might be a while before they're called.
This method shows up as super hot in the profiler, and also has a high "self" time.
Upon analyzing, it appears most usages of this method fall down to the final
else statement of the nasty ternary.
Upon even further analysis, it appears the majority of those have a
consistent list 1.... one with an Infinity head and tail.
First optimization is to detect these infinite states and immediately return that
VoxelShapeMergerList so we can avoid testing the rest for most cases.
Break the method into 2 to help the JVM promote inlining of this fast path.
Then it was also noticed that VoxelShapeMergerList constructor is also a hotspot
with a high self time...
Well, knowing that in most cases our list 1 is actually the same value, it allows
us to know that with an infinite list1, the result of the merger is essentially
list2 as the final values.
This lets us analyze the 2 potential states (infinite with 2 sources or 4 sources)
and compute a deterministic result for the MergerList values.
Additionally, this lets us avoid even allocating new objects for this, further
reducing memory usage.
We've seen many cases where the "last good" x/y/z is desynced from
the x/y/z that is checked for moving too fast.
The theory is that when you have multiple movement packets queued up,
and the player is teleported after the first, then the 2nd and 3rd come in,
it triggers a massive movement velocity.
This will ensure that the server's position is synchronized any time the player is teleported.
Fixes #3258
It still technically read correctly for what it was doing, but
all our Player events begin with Player.
Nothing uses this event yet so safe to rename.
If you are some rapid adopter of this event, sorry :P
If a server enables Anti Xray, packet sending can be delayed until the
chunk has been obfuscated, blocking the entire queue from going out.
On a busy server, considering Anti Xray can only operate on a single
thread, it is quite possible the obfuscation backlog can get quite behind
resulting in a delay of sending packets.
And logging in is a clear area where lots of chunks are going to be queued
for obfuscation....
We should probably special case a few more than this (such as chat),
but this will hopefully help the keep alive issues some people run into.
Now has separate configs to control villager immunities a bit:
whether or not they wake up due to panic situations (raids),
when they should wake up when work is available after being
inactive for so long, and for how long.
This work config may make the 'wake up inactive' feature for villagers
useless in most scenarios, but if there is a situation where the villager
does go without needing to work for a long period of time, it would kick
in then.
This also removes movement-based immunities, so now villagers should only move
if they trigger a work immunity, panic immunity, or inactive wake-up immunity.
Fixes #3263
Upstream has released updates that appear to apply and compile correctly.
This update has not been tested by PaperMC and, as with ANY update, please do your own testing.
Bukkit Changes:
b999860d SPIGOT-2304: Add LootGenerateEvent
CraftBukkit Changes:
77fd87e4 SPIGOT-2304: Implement LootGenerateEvent
a1a705ee SPIGOT-5566: Doused campfires & fires should call EntityChangeBlockEvent
41712edd SPIGOT-5707: PersistentDataHolder not Persistent on API dropped Item
If a sync load was triggered, it would process pending join events,
causing them to be added to the world in the middle of the entity ticking
process.
This caused their add to be queued instead of immediate, causing
"Illegal Tracking" errors.
This schedules it to fire at the player's next Connection Tick, which
is exactly where this entire process used to run anyways.
Also added missing tab completion and syntax for the syncloadinfo debug command.
Upstream has released updates that appear to apply and compile correctly.
This update has not been tested by PaperMC and, as with ANY update, please do your own testing.
Bukkit Changes:
220bc594 #486: Add method to get player's attack cooldown
21853d39 SPIGOT-5681: Increase max plugin channel size
5b972adc Improve build process
b55e58d9 Note which custom generator is missing required method
CraftBukkit Changes:
893ad93b #650: Add method to get player's attack cooldown
ef706b06 #655: Added support for the VM tag jansi.passthrough when processing messages sent to a ColouredConsoleSender.
e0cfb347 SPIGOT-5689: Fireball.setDirection increases velocity too much
94cb030f SPIGOT-5673: swingHand API does not show to self
b331a055 SPIGOT-5680: isChunkGenerated creates empty region files
e1335932 Improve build process
a8ec1d60 Add a couple of method null checks to CraftWorld
ce66f693 Misc checkstyle fixes
8bd0e9ab SPIGOT-5669: Fix Beehive.isSedated
Spigot Changes:
2040c4c4 SPIGOT-5677, MC-114796: Fix portals generating outside world border
ab8f6b5a Rebuild patches
e7dc2f53 Rebuild patches
This is the start of a new module for Paper to add support for APIs
that interface with Mojang APIs directly.
This allows us to version properly by MC version, in case Mojang makes any major breaking changes.
It also lets us separate Mojang APIs from Paper-API so our downstream friends at Glowstone
will not have to worry about Mojang code.
Adds AsyncPlayerSendCommandsEvent
- Allows modifying, on a per-command basis, what command data players see.
Adds CommandRegisteredEvent
- Allows manipulating the CommandNode to add more children/metadata for the client
Calling this 2.0 as it's a pretty major improvement with more knobs to twist.
This update fixes many things. The goal here is to restore vanilla behavior to some degree.
Instead of permanent inactive pools of animals, let them show some signs of life....
Yes, this may reduce performance compared to before, but I hope it is minimal. Got to find a balance.
Previous EAR logic really compromised vanilla behavior of mobs. This tries to restore it.
Changes:
1) All monsters are now classed as Monster. Mojang has an interface, we should use it.
- This now includes Shulker, Slimes, see #2 for Phantom and Ghast
2) Villagers and Flying Monsters now have their own separate activation range configs.
- Villagers will default to your Animals config
3) Added a bunch of more immunities
- Brand new entities are immune for a few seconds
- Entities that recently traveled by portal are immune for few seconds
- Entities that are leashed to a player are immune
- Ender Signals are immune
- Entities that are jumping, climbing, dying (lol) are immune
- Minecarts are now always immune to the movement restriction
4) Villagers immunity received major overhaul...
- Now has many immunities for Villager activities to let them
do their work then go back inactive
- Such as interacting with doors and workstations should be more normal now
- Raids will trigger immunities, in that villagers will run and hide when bell rings.
- Raids should keep the entire village immune during the raid to keep gameplay mechanics.
You can disable raids via game rule if you don't want raids.
Then the big one.....
Wake Up Inactive Entities:
One issue plaguing "farms" is that we no longer even let entities move now.
Entities become lifeless.
A new system has been introduced to wake up inactive entities every so often, to let
them stretch their legs, eat some food, play with each other and experience the good entity life.
Animals, Villagers, Monsters (includes Pillagers), and Flying Monsters will now wake up every
so often after staying inactive for a very long time. This grants them a temporary immunity;
the goal is that they will then find "stuff to do" by having a longer activity window.
How many to wake up, how often they wake up, and for how long they wake up are all configurable.
Current EAR Immunities really don't give some entities enough of a window to find work
to then keep them immune for the work to even start. This system should help that.
We will only wake up a few entities per tick on the first wave, restoring 1 per type per world per tick.
So say you have 10 monsters that qualify for inactive wake up: 8 will wake up on the first eligible tick,
then the 9th will wake up on the next tick, and the 10th on the tick after.
If for 5 ticks there are no more inactive wake ups, our buffer will have built back up to 5, and then 5 can go on the next needed tick.
This basically incrementally wakes them up, preventing too many from waking up in a single tick, to reduce the impact on TPS.
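A simplified sketch of that replenishing wake-up budget; the cap and replenish rate here are illustrative, not the actual config values.

```java
final class InactiveWakeupBudgetExample {
    private final int maxBudget;
    private int budget;

    InactiveWakeupBudgetExample(int maxBudget) {
        this.maxBudget = maxBudget;
        this.budget = maxBudget;
    }

    /**
     * Called once per world tick for one entity type; returns how many of the
     * {@code due} inactive entities may be woken up this tick.
     */
    int allowedWakeups(int due) {
        // One token is restored per tick (up to the cap), so a quiet stretch rebuilds the
        // budget while a large backlog drains at roughly one extra entity per tick.
        budget = Math.min(maxBudget, budget + 1);
        int allowed = Math.min(budget, due);
        budget -= allowed;
        return allowed;
    }
}
```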
This was missing Entity Tracking Range support, creating different
values in this section vs the normal section.
Concerned this might have caused some carnage on the tracker if this code says
"Yes you should track this player 500 blocks away from you on a horse" and then
the other check uses the normal value.
Set:
settings:
  use-optimized-ticklist: false
if you are having issues with block updates and want to see if this fixes it.
Please report confirmations on the #3145 ticket.
This is friendlier to plugins: as far as the plugin is concerned,
the inventory did open and was immediately closed.
We avoid sending the packet to the client so they don't see the window
flash either.
If a plugin wants to avoid wasteful fake opens, they should check
that the player is not sleeping before opening the inventory.
Renames a bunch of timings to be more appropriate for the new environment.
Many things dealt with sync loads, which wasn't correct anymore.
Adjusted timings to be a little bit more accurate here.
Also cleaned up old 1.13 async chunks configs so people won't keep
thinking they can change some of those configs when they can't.
Process loads outside of any canSleep check. The original intent was to
only apply those restrictions to generations, but I realized I had some
checks higher up the call chain.
Reworked the back-off strategy to just run every 1 millisecond per world,
and to apply the per-tick limit to generations only.
This guarantees that your chunk will load with at most around 1ms delay.
Additionally, fire midTick processing in a few more places, notably the
oversleep section, so we can keep processing loads here too, which has
a large window of up to 50ms...
Speaking of oversleep, we had a bug in our implementation changes for
Timings that caused oversleep to not sleep the correct amount.
Because we now moved it into the NEXT tick instead of THIS tick, the
value of nextTick had already been increased to +50ms, resulting in
the risk of sleeping more than it should, but, more importantly, this
caused every task that was trying to NOT run during oversleep to actually
run during oversleep.
This is now fixed.
Another small tweak is to the /tps command, to no longer show the star when
TPS is right at 20.
Due to inefficiencies in the sleep precision, TPS is commonly 20.02.
This caused the star to show up almost constantly, so now we only show it if
we actually hit a real "catchup".
This commit also improves the changes to the CallbackExecutor, in that
it is now also recursion-safe.
It was possible that the executor could run tasks out of desired order
if the executor task scheduled more executor tasks.
We solve this by ensuring new additions do not enter the currently iterated queue.
Each depth level will have its own queue.
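A simplified sketch of that recursion-safe draining (tasks queued while draining land in a fresh queue for the next depth); the names are illustrative, not the actual CallbackExecutor.

```java
import java.util.ArrayDeque;
import java.util.concurrent.Executor;

final class RecursionSafeExecutorExample implements Executor {
    private ArrayDeque<Runnable> queued = new ArrayDeque<>();

    @Override
    public void execute(Runnable task) {
        queued.add(task);
    }

    /** Called from the owning thread to drain everything that has been queued so far. */
    public void run() {
        while (!queued.isEmpty()) {
            // Swap in a fresh queue; tasks submitted by the tasks we are about to run land
            // there and are drained on the next pass, preserving submission order per depth.
            ArrayDeque<Runnable> current = queued;
            queued = new ArrayDeque<>();
            Runnable task;
            while ((task = current.poll()) != null) {
                task.run();
            }
        }
    }
}
```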
Fixes #3220
This notably fixes the newest "Donkey Dupe", but also fixes a lot
of dupe bugs in general around nether portals and entity world transfer.
We also fix item duplication generically: any time we clone an item
to drop it on the ground, we destroy the source item.
This avoids an itemstack ever existing twice in the world state pre
clean-up stage.
So even if something NEW comes up, it would be impossible to drop the
same item twice, because the source was destroyed.
This should make us more forward-proof on preventing dupes.
These dupes have been in for years at this point; they aren't new...
Everyone knows about them and is mitigating with plugins atm, breaking gameplay,
so it's better to make it clear in the messaging that they're fixed.
I am submitting this to Mojang.
We still keep the vanilla process of waiting for the existing session to be removed before logging in,
by storing a separate map of pending sessions.
Also fire the callback using the executor, in case further recursion causes any trouble.
* Don't check for Entities with Inventories if the block above us is also occluding (not just Inventoried)
* Remove Streams from Item Suck In and restore the 1.12 AABB checks, which are simpler and do no voxel allocations (was doing TWO Item Suck Ins)
* Restore missing application of previous optimization to getEntities for Inventoried Entities from CullanP
* Use getChunkIfLoadedImmediately for getting loaded entities (faster/simpler, no risk of sync loads)
I feel sorry for those who need to do this, and now feel even more sorry
since we're back to slow startups again.
There is keep-spawn-loaded-range in paper.yml to reduce the range to
mitigate this if you must keep async chunks off.
Bump chunk priority to ensure chunks load fast.
Handle the case where the client disconnects before they even fire PlayerJoinEvent:
- no longer call PlayerQuitEvent or print quit message.
- don't save the player data file if never joined. Nothing has changed.
CraftBukkit has a bug here that if you do save it, you will lose
any horse that the player logged off on because the horse hasn't
been resummoned yet.
ChunkMapDistance polls multiple entries for pendingChunkUpdates.
Each of these has the potential to move a chunk in and out of
"Loaded" state, which will result in multiple callbacks being
needed within a single tick of ChunkMapDistance.
Use an ArrayDeque to store this queue.
This event is called when processing a player's attack on an entity,
right before their attack strength cooldown is reset. There are no existing
events that fire within this period of time, so it was impossible to
capture the player's attack strength via the API prior to this commit.
The event is cancellable, which will just skip over the normal reset of the
attack strength cooldown.
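A usage sketch for this event; the event class name and package are assumed here (Paper's PlayerAttackEntityCooldownResetEvent) and may differ.

```java
import com.destroystokyo.paper.event.player.PlayerAttackEntityCooldownResetEvent; // assumed class/package
import org.bukkit.event.EventHandler;
import org.bukkit.event.Listener;

public final class AttackCooldownListener implements Listener {
    @EventHandler
    public void onCooldownReset(PlayerAttackEntityCooldownResetEvent event) {
        // Cancelling skips the normal cooldown reset, leaving the player's attack
        // strength untouched after this hit (hypothetical permission node).
        if (event.getPlayer().hasPermission("example.keep-attack-strength")) {
            event.setCancelled(true);
        }
    }
}
```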
This change lets players who are in their bed have a position which is above
ground for a longer period of time. This is because of the server not setting
their position to the ground/exit location when entering the bed, resulting in
the server believing they're still in the air.
Because we moved entity registration to occur before the PlayerJoinEvent occurs,
we started tracking the entity too early, before it was registered to the client.
So delay tracking until after list packets have been sent.
No longer will trigger Synchronous Chunk Loads when a player logs
in to the server.
Will delay PlayerJoinEvent until the chunk has been loaded.
Should have massive performance benefits for larger servers with
lots of players logging in and out.
Confused on this one, as commit history says Spigot's version is older
than our version, so I'm not sure how we ended up duplicating this when
the 2 events are 100% identical.
Subclass Spigot's event and rely on the inheritance system, and clean up
the duplicate event fires.
Fix Spigot's setPosition to use setPositionRaw to avoid a premature chunk load.
For years, plugin developers have had to delay many things they do
inside of the PlayerJoinEvent by 1 tick to make it actually work.
This all boiled down to 1 reason why: The event fired before the
player was fully ready and joined to the world!
Additionally, if that player logged out on a vehicle, the event
fired before the vehicle was even loaded, so that plugins had no
access to the vehicle during this event either.
This change finally fixes this issue, fully preparing the player
into the world as a fully ready entity, vehicle included.
There should be no plugins that break because of this change, but might
improve consistency with other plugins instead.
For example, if 2 plugins listen to this event, and the first one
teleported the player in the event, then the 2nd plugin actually
would be getting a valid player!
This was very non-deterministic. This change will ensure every plugin
receives a deterministic result, and should no longer require 1-tick
delays anymore.
Appending to the tail of the chunk tasks leaves a
window for the chunk to be moved to a
non-ticking status.
Additionally, use CB's callback executor so we
can ensure that we are not incorrectly
scheduling.
See: https://gist.github.com/aikar/dd22bbd2a3d78a2fd3d92e95e9f28dc6
As part of post-processing a chunk, we can call ChunkConverter.
ChunkConverter then kicks off major physics updates, and when blocks
that have connections across chunk boundaries occur, a recursive risk
can occur where A updates a block that triggers a physics request.
That physics request may trigger a chunk request, which then enqueues
a task into the Mailbox ChunkTaskQueueSorter.
If anything requests that same chunk that is in the middle of conversion,
its mailbox queue is going to be held up, so the subsequent chunk request
will be unable to proceed.
We delay post-processing of Chunk.A() by 1 "pass", by re-stuffing it back into
the executor so that the mailbox ChunkQueue is now considered empty.
This successfully fixed a recurring and highly reproducible crash
for heightmaps.
If the request to shut down the server is received while we are in
a watchdog hang, immediately treat it as a crash and begin the shutdown
process. Shutdown process is now improved to also shutdown cleanly when
not using restart scripts either.
If a server is deadlocked, a server owner can send SIGHUP (or any other signal
the JVM understands to shut down as it currently does) and the watchdog
will no longer need to wait until the full timeout, allowing you to trigger
a close process and try to shut the server down gracefully, saving player and
world data.
Previously there was no way to trigger this outside of waiting for a full watchdog
timeout, which may be set to a really long time...
Additionally, fix everything to do with shutting the server down asynchronously.
Previously, nearly everything about the process was fragile and unsafe. Main might
not have actually been frozen, and might still be manipulating state.
Or, some request might ask main to do something in the shutdown but main is dead.
Or worse, other things might start closing down items such as the Console or Thread Pool
before we are fully shut down.
This change tries to resolve all of these issues by moving everything into the stop
method and guaranteeing only one thread is stopping the server.
We then issue Thread Death to the main thread if another thread initiates the stop process.
We have to ensure Thread Death propagates correctly though, to stop main completely.
This is to ensure that if main isn't truly stuck, it's not manipulating state we are trying to save.
Also check the class loader cache before locking, to speed up cached hits and avoid the lock;
wasn't gonna make a unique build just for that but can lump it in here.
Very few entities actually hard collide, so store them in their own
entity slices and provide a special getEntities-type call just for them.
This reduces entity collision checking impact (in my testing) by 25%
for crammed entities (shove 130 cows into an 8x6 area in one chunk).
Less crammed entities are likely to show significantly less benefit.
Effectively, this patch optimises crammed entity situations.
A player's previous block-break location is held onto permanently, and if
an interact event is cancelled, the client sends a stop-breaking-block packet.
This then tries to update the client about that old location.
That old location might then be in a now-unloaded chunk, and it caused it to load.
We now also clear the reference to it once abort destroy block is run, to stop trying
to send updates about the old block anyways.
I had done a few of the operations myself, which would have prevented chunkCheck
from doing it itself, which would leave some state left in the original chunk,
and that's not good....
If the request to shut down the server is received while we are in
a watchdog hang, immediately treat it as a crash and begin the shutdown
process. Shutdown process is now improved to also shutdown cleanly when
not using restart scripts either.
If a server is deadlocked, a server owner can send SIGHUP (or any other signal
the JVM understands to shut down as it currently does) and the watchdog
will no longer need to wait until the full timeout, allowing you to trigger
a close process and try to shut the server down gracefully, saving player and
world data.
Previously there was no way to trigger this outside of waiting for a full watchdog
timeout, which may be set to a really long time...
Leaf informed me this could cause ordering issues.
So, the risk of this occurring is lowered now anyways, but if an
entity causes a sync chunk load, it could process an unload...
We will tackle the problem better in a future commit.
Also fixed another async-chunks=false issue.
This will help prevent many cases of unregistering entities during entity ticking.
Currently delays Chunk Unloads and Async Chunk load callbacks.
Also dropped mid-ticking chunk tasks during entity ticking to reduce this risk.
The previous method only worked for a normal shutdown, and didn't include
when the server enters a closing state due to watchdog crashes.
This is the correct variable to detect that the server is in the middle of the shutdown process.
The streams hurt performance and allocate tons of garbage, so
replace them with the standard iterator.
Also optimise the stream.anyMatch statement to move to a bitset
where we can replace the call with a single bitwise operation.
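A minimal sketch of that bitset idea: precompute a mask of the interesting flags once, so the per-call anyMatch collapses to a single bitwise test. The enum and flag names are illustrative.

```java
import java.util.EnumSet;
import java.util.Set;

final class BitmaskAnyMatchExample {
    enum Flag { A, B, C, D }

    static int maskOf(Set<Flag> flags) {
        int mask = 0;
        for (Flag flag : flags) {
            mask |= 1 << flag.ordinal();
        }
        return mask;
    }

    public static void main(String[] args) {
        Set<Flag> interesting = EnumSet.of(Flag.B, Flag.D);
        int interestingMask = maskOf(interesting);          // computed once up front

        int stateMask = maskOf(EnumSet.of(Flag.A, Flag.D)); // the state being queried

        // Equivalent of interesting.stream().anyMatch(state::contains), without streams:
        boolean anyMatch = (stateMask & interestingMask) != 0;
        System.out.println(anyMatch);
    }
}
```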
This fix is for the few people who are using such low-end systems that
asynchronous chunk loading hurts them rather than helping.
The previous build made Paper crash if you turned off async chunks, and
this fixes that issue.
Mark chunks that are blocking main thread for world generation as urgent
Implements a general priority system so that chunks that are sorted in
the generator queues can prioritize certain chunks over another.
Urgent chunks will jump to the front of the line, ensuring that a
sync chunk load on an ungenerated chunk does not lag the server for
a long period of time if the servers generator queues are filled with
lots of chunks already.
This massively reduces the lag spikes from sync chunk gens.
This is also a precursor to my next improvement to prioritize chunks
in front of the player (Frustum Prioritization).
In most cases, this change won't benefit much. However, there
exists the possibility that your Chunk Task threads are all busy
doing super slow work such as converting chunks.
If this occurs, main thread blocking tasks, even at the highest priority,
have to wait for some thread to become available.
This change gives us a waiting thread used only for main thread blocking
tasks, as well as an increased thread priority level, so that the OS
will give priority to this thread over the other threads.
This is more about guarantees, and won't be any real performance boost
to anyone who has low or fast activity on their chunk tasks anyways.
But not all of us force-upgrade our worlds, and this can be a life saver.
Also reordered some patches because multiple PRs were merged.