Commit Graph

198 Commits

Spottedleaf
09ac612b2f Do not attempt to pathfind into non-owned chunks
Fixes https://github.com/PaperMC/Folia/issues/117
2023-08-08 17:54:27 -07:00
Spottedleaf
3379b89797 Add thread check for NMS block setting
This is to catch bad block updates in chunks; currently, for
performance reasons, loaded chunks do not perform thread checks
2023-08-08 17:49:52 -07:00
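
A minimal sketch of the shape of such a check, with hypothetical names (Folia's real helper consults its regioniser rather than comparing threads directly):

```java
// Hypothetical sketch only; not Folia's actual API.
final class BlockSetThreadCheck {
    // Stand-in for "the thread that owns the region containing this chunk".
    static Thread regionOwner(int chunkX, int chunkZ) {
        return Thread.currentThread(); // placeholder: real code asks the regioniser
    }

    static void setBlock(int x, int y, int z) {
        if (Thread.currentThread() != regionOwner(x >> 4, z >> 4)) {
            throw new IllegalStateException(
                "Off-region block set at (" + x + ", " + y + ", " + z + ")");
        }
        // ... perform the actual block update ...
    }
}
```
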
Spottedleaf
65667ccd07 Fix race condition on UpgradeData.BlockFixers class init
The CHUNKY_FIXERS field is modified during the constructors
of the BlockFixers, but the code that uses CHUNKY_FIXERS does
not properly ensure that BlockFixers has been initialised before
using it, leading to a possible race condition where instances of
BlockFixers are accessed before they have been fully initialised.

We can force the class to initialise fully before accessing the
field by calling any method on the class, and for convenience
we use values().

Fixes https://github.com/PaperMC/Folia/issues/141
2023-08-08 17:31:01 -07:00
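
A minimal sketch of the hazard and the fix, using simplified stand-in types rather than Mojang's actual UpgradeData:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

final class UpgradeDataSketch {
    // Populated as a side effect of the BlockFixers constructors during
    // that enum's class initialisation.
    static final List<Object> CHUNKY_FIXERS = new CopyOnWriteArrayList<>();

    enum BlockFixers {
        CHEST, LEAVES;

        BlockFixers() {
            CHUNKY_FIXERS.add(this);
        }
    }

    static void runFixers() {
        // Without this call, a thread that never touched BlockFixers could
        // observe CHUNKY_FIXERS empty or mid-population. Class initialisation
        // is guarded by the JVM, so values() blocks until <clinit> finishes.
        BlockFixers.values();

        for (Object fixer : CHUNKY_FIXERS) {
            // ... apply the fixer ...
        }
    }
}
```
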
Spottedleaf
3cf16eeaf2 Reset player before running place logic
As part of Folia's place logic, the player's health is sent
after the respawn packet.

Since the health is <= 0.0, this would cause the client to
die again. This would cause the respawn screen to appear again,
and would additionally cause other players to see the player as
dead.

There is a small window where this would not have occurred: when
the server sends the correct health before the client ticks again.
This is why the issue was not reproducible locally, as there is
almost zero delay between those events on an idle server with a
perfect 0ms ping.

Fixes https://github.com/PaperMC/Folia/issues/112
2023-08-07 14:35:18 -07:00
Spottedleaf
b2d7bdb0bb Make loadChunksAsync callback thread-safe
Need to perform synchronisation on the return list to avoid CMEs
2023-07-25 12:00:33 -07:00
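
The shape of the fix as a self-contained sketch (the scheduler below is a stand-in, not the chunk system): every thread touching the shared result list, including the callback, must hold the same monitor.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntConsumer;

final class AsyncLoadSketch {
    // Stand-in for the chunk scheduler: callbacks may fire on any thread.
    static void loadChunksAsync(int[] chunks, IntConsumer onLoad) {
        for (int chunk : chunks) {
            new Thread(() -> onLoad.accept(chunk)).start();
        }
    }

    public static void main(String[] args) {
        List<Integer> ret = new ArrayList<>();
        loadChunksAsync(new int[] {0, 1, 2}, chunk -> {
            synchronized (ret) { // the fix: the callback holds the monitor
                ret.add(chunk);
            }
        });
        synchronized (ret) {     // readers must hold the same monitor
            System.out.println("loaded so far: " + ret);
        }
    }
}
```
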
Spottedleaf
57983f77f7 Rewrite spawn selection algorithm
The new spawn selection algorithm attempts to search the area
around a selected point, in an effort to reduce the total number
of chunk loads required to select a spawn point.

Additionally, the new spawn selection algorithm does not perform
recursion when the selected area is already loaded and owned by
the current region. This fixes https://github.com/PaperMC/Folia/issues/138
2023-07-25 11:52:46 -07:00
Spottedleaf
daacd42550 Use Folia for bstats version instead of Paper 2023-07-12 18:41:19 -07:00
Spottedleaf
b5fc6d0a12 Correctly handle ender pearl end gateway teleportations
The end gateway is supposed to teleport the person who threw
the ender pearl.

The changes more closely mirror Vanilla behavior. The current
exceptions to Vanilla behavior are:

1. The first teleportation attempt for the end gateway always fails
2. If the ender pearl thrower is riding a vehicle, the thrower is
   dismounted from their vehicle.

I don't see any solutions for #1 right now. The root issue is that
since the end gateway does not have a target location, it has to
search for one. However, it can _fail_ to find a target location,
in which case the teleportation should not occur. Since the search
must take place asynchronously, it requires the entity to be
removed from the world.

For #2, this is because Vanilla's behavior is broken and does not
correctly teleport players riding boats. We can fix this by simply
dismounting the player and teleporting them separately from their boat,
which seems to be what Vanilla is trying to do given it does _not_
try to teleport the root vehicle of the player.

This is a partial fix to https://github.com/PaperMC/Folia/issues/51
2023-07-09 21:47:25 -07:00
Spottedleaf
62b165bd7c Set correct riding position for entity passengers on vehicle move packet
Since Folia moves the connection tick to the beginning of the tick,
the player's position would be incorrectly updated by the move
packet and be used during the tick.

This would cause the player's bounding box to be incorrect, which
would cause incorrect movement collision calculations, such as
colliding with fire.

Fixes https://github.com/PaperMC/Folia/issues/119
2023-07-09 20:40:57 -07:00
Spottedleaf
c0631fd5cd Do not erase job site memory when not in tick thread region
The intention behind erasing the memory was to match the case
where the villager loses the job site to a competitor. However,
there may not be any competitors, in which case the villager
would never lose it.

Instead, the new behavior is to behave as if the villager were not
loaded.

Fixes https://github.com/PaperMC/Folia/issues/64
2023-07-09 20:40:44 -07:00
Spottedleaf
bd96e299d4 Update paper 2023-07-06 22:26:45 -07:00
Sofiane H. Djerbi
6b978f2aaf Add Yaw and Pitch to CraftEntity.teleportAsync 2023-07-01 13:51:05 -07:00
Spottedleaf
6e317fd38a Only thread-check addEffect for entities in the world
Resolves possible crashes when adding effects before adding to the
world or when adding from the worldgen threads.
2023-07-01 13:43:32 -07:00
Spottedleaf
bff0370b70 Check region for Vex spell origin
If the Vex portals or is moved far enough away, it may trip
a thread check.

Fixes https://github.com/PaperMC/Folia/issues/95
2023-07-01 13:12:44 -07:00
Spottedleaf
eb2231736b Only update time for local players in time update tick
This was done in 1.19.4, but the diff to use getLocalPlayers()
was dropped by accident.

Fixes https://github.com/PaperMC/Folia/issues/114
2023-07-01 12:18:44 -07:00
Spottedleaf
d1c9e63470 Use teleportAsync for handling cancelled move events
Some plugins are bad and update the `from` position to complete
garbage. To avoid a crash from this cross-region
teleportation, the teleportAsync function is now used.

The reason the teleport isn't simply ignored is since there may
be legitimate reasons to update the `from` position to something
off-region. This also handles the case where the plugin _uses_
an asynchronous teleport while cancelling the event.

This mirrors the behavior for changing the target destination
but not cancelling the event.

Fixes https://github.com/PaperMC/Folia/issues/115
2023-07-01 12:10:16 -07:00
Spottedleaf
801cff1570 Re-add dropped thread check for retrieving fall position from entities
Additionally, reset the fall position on dimension change.

Fixes https://github.com/PaperMC/Folia/issues/99#issuecomment-1610453068
2023-06-27 19:14:35 -07:00
Spottedleaf
633abb1d50 Optimise regionized save on shutdown
When there are many chunkholders and regions, the cost of collecting
the chunk holders and checking the tick thread for each one on every
region save becomes the dominant cost of the save call. To avoid
this, collect the chunk holders from the current region's owned
sections instead.

This showed a significant speedup when running the "walk test" found
in RegionizedServer locally (>90% of the time was previously spent
on holder iteration/checking).
2023-06-27 17:14:06 -07:00
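
In sketch form (the types are illustrative stand-ins, not the chunk system's), the change replaces a scan over every holder with a walk over only the region's own sections:

```java
import java.util.List;
import java.util.Map;

final class RegionSaveSketch {
    record Holder(long sectionKey) {}
    record Section(List<Holder> holders) {}

    // Before: every region save scans all holders in the world and runs an
    // ownership check on each - the dominant cost at scale.
    static void saveByFiltering(List<Holder> allHolders, Map<Long, Section> owned) {
        for (Holder holder : allHolders) {
            if (owned.containsKey(holder.sectionKey())) {
                save(holder);
            }
        }
    }

    // After: walk only the sections the current region owns; ownership is
    // implied by the section, so no per-holder check is needed.
    static void saveOwnedSections(Map<Long, Section> owned) {
        for (Section section : owned.values()) {
            section.holders().forEach(RegionSaveSketch::save);
        }
    }

    static void save(Holder holder) {
        // ... write the chunk's data to disk ...
    }
}
```
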
Spottedleaf
81fe50f26f Always synchronise on target for regionized queue mergeInto
While the synchronisation occurred for merging, it did not occur
for splitting. This resolves a possible CME that could occur while
splitting regions.
2023-06-27 13:43:51 -07:00
Spottedleaf
34039e3709 Fix some issues from Folia test
A change to the player spawn position caused any player logging in
while on a boat to trip a thread check. To resolve this, simply
reposition any mounted entities if they are more than 5 blocks away
from the player.

In general, this may happen to other entities that are loaded from
chunks as well. In these cases, we can delete the entity if it itself
is not saved in the correct chunk, and we can reposition the mounted
entities if they are not in the correct chunk. For tile entities,
we can simply remove them if they are not in the chunk.

These changes broadly should make loading player/chunk data more
resilient to bad logic run by plugins.

Also, the Folia test revealed that the chunk generate rate limiter
also affected chunk loading, which was not intended and has been
resolved by exempting already FULL status chunks from the limit.
2023-06-25 14:01:20 -07:00
Sofiane H. Djerbi
0558e7d923 Fix funny respawn animation & respawn button not working 2023-06-25 13:51:15 -07:00
Spottedleaf
7c6e2514d2 Update paper 2023-06-16 10:42:12 -07:00
Spottedleaf
0dd151fd1e Make regioniser more aggressively recalculate regions
This is to try and prevent regions from grouping together
when they really shouldn't be.
2023-06-16 10:07:22 -07:00
Spottedleaf
e8e6ac4006 Avoid off-region chunk read for paper entity command
Allow usage of the specific entity command
2023-06-16 10:02:19 -07:00
Spottedleaf
dc7eeddb96 Update paper 2023-06-16 09:48:16 -07:00
Spottedleaf
fa018cc372 Update to 1.20.1
No changes to note
2023-06-13 14:24:31 -07:00
Spottedleaf
db2e6578f8 Clear main supporting position on teleport
It becomes invalid when switching dimensions or moving far away.
If it is used after teleporting, then it may also trip thread checks.

Fixes https://github.com/PaperMC/Folia/issues/94
2023-06-13 13:52:26 -07:00
Spottedleaf
8a067cdbdd Update leafprofiler to be able to dump to a list of strings
First steps to making this thing useful
2023-06-13 12:21:52 -07:00
Jason Penilla
23b6f9e0ef Fix processUnloads trying to unload for all regions at once 2023-06-12 13:01:23 -07:00
Spottedleaf
9b2ffd03cf Remove unused skyLightSources
Starlight does not use the sky light sources, and there
appear to be issues with TE access on Folia when initializing
them. So, we can just delete it entirely.
2023-06-10 14:10:20 -07:00
Spottedleaf
b886376c26 Update to latest paper
Make sure the player chunk loader throws when a double-remove
occurs, as that should not be happening on Folia
2023-06-10 14:09:11 -07:00
Spottedleaf
fd838ffbee Update to latest paper
Fix two regionizer issues:

In ThreadedRegionizer#addChunk, fix the incorrect handling
of merging two regions where one of the regions had
pending merges. If the first region had pending merges,
and the second was marked as "ready" then the merge would
cause a "ready" region to have pending merges. The fix is
to simply downgrade the "ready" region to "transient,"
as was previously done if the merge was delayed in the
case where the first region was "ticking."

Additionally, prevent the creation of empty regions
by checking if any new sections were created. This would
happen when a section existed, but had no marked chunks
in it AND all of the section's neighbours existed. In these
cases, no region needs to be created as no sections were
created.
2023-06-09 23:44:24 -07:00
Spottedleaf
d0517f1656 Fix compile / boot 2023-06-08 21:03:13 -07:00
Spottedleaf
1b0a5071d0 Initial patch apply 2023-06-08 20:15:55 -07:00
Spottedleaf
bf7ba50dcd Fix missing captureBlockStates usage
Needs to be redirected to the regionized world data.
2023-06-07 16:37:19 -07:00
Spottedleaf
308d1ca5dc Update to latest paper 1.19 2023-06-07 15:32:55 -07:00
Spottedleaf
ca3b7adee2 Resolve issues with player autosave
The last save was based on region tick, but it was not adjusted
on player region change or region merge. To resolve this,
I have adjusted the last save to be based on time so that it does
not need adjustments on region change or region merge.

Additionally, fix the max per tick handling.
2023-05-27 12:14:57 -07:00
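
A minimal sketch of the time-based approach, with an assumed interval (the real interval is configuration-driven): because wall-clock time is global, the saved timestamp needs no rebasing when the player changes region or regions merge.

```java
final class PlayerAutosaveSketch {
    static final long SAVE_INTERVAL_NANOS = 5L * 60L * 1_000_000_000L; // assumed: 5 min

    private long lastSaveNanos = System.nanoTime();

    // Called once per tick by whichever region currently owns the player.
    void tick() {
        long now = System.nanoTime();
        if (now - this.lastSaveNanos >= SAVE_INTERVAL_NANOS) {
            this.lastSaveNanos = now;
            // ... save the player's data ...
        }
    }
}
```
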
Spottedleaf
df0065bd53 Eliminate usages of MinecraftServer#tickCount field
Mobs would use the evenness of the server tick count plus id
to determine whether they would tick only their running
goals or tick the goal selector to find additional
goals. If the server had an even number of regions,
then every 50ms the server tick field would be incremented
by an even number and as a result would not change
the evenness of the mob goal check. This could put
some mobs in a state where they only ticked their
running goals, which would result in them
freezing.

Fixes https://github.com/PaperMC/Folia/issues/42
2023-05-27 09:41:57 -07:00
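
The arithmetic of the bug, worked through in a simplified standalone form (the real check lives in Mojang's goal selector; the numbers here are illustrative): with an even region count, the parity of tickCount + id never changes.

```java
public final class GoalParityBug {
    public static void main(String[] args) {
        int regions = 4;    // an even number of regions
        int tickCount = 0;  // stand-in for MinecraftServer#tickCount
        int mobId = 3;

        for (int step = 0; step < 5; step++) {
            tickCount += regions; // each region increments once per 50ms
            boolean fullGoalTick = (tickCount + mobId) % 2 == 0;
            System.out.println("tick=" + tickCount + " fullGoalTick=" + fullGoalTick);
        }
        // fullGoalTick prints false on every iteration: this mob never
        // re-runs its goal selector and appears frozen.
    }
}
```
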
WillQi
be3c9e596e Fix off region raid heroes 2023-05-16 15:39:40 -07:00
Spottedleaf
f15f1ceab5 Fix some bugs in ThreadedTicketLevelPropagator
First, when a section update is stolen, the thread that acquires
the stolen update should remove the update from the update queue
before returning, to mark it as completed and allow other threads
waiting on the update to continue. This fixes a deadlock issue
with section updates.

Fix incorrect decrease queue resize. Previously, it attempted
to resize the _increase_ queue, which is the wrong queue.

Use ALL_DIRECTIONS_BITSET for every decrease queue direction bitset,
as decrease propagation cancellation due to neighbour values exceeding
the target decrease value causes some neighbour directions to not
be checked, which leaves the final update grid incorrect.
2023-05-15 21:29:38 -07:00
Spottedleaf
ed61eb315e Fix infinite loop in ChunkBasedPriorityTask#queue
We must attempt to synchronise when the returned queue is null so
that we either obtain a correct queue, return false because the
reference counter has been released, or throw an exception when the
queue is null but the reference counter is not released.
2023-05-15 20:46:30 -07:00
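
A generic sketch of that control flow (names and structure assumed; the real class also has a lock-free fast path): under the monitor the three outcomes can be told apart, instead of spinning forever on a null queue.

```java
import java.util.ArrayDeque;
import java.util.Queue;

final class PriorityTaskQueueSketch {
    private Queue<Runnable> queue = new ArrayDeque<>();
    private int referenceCount = 1;

    synchronized boolean queue(Runnable task) {
        if (this.queue == null) {
            if (this.referenceCount <= 0) {
                return false; // released: the caller must reject the task
            }
            // Live references but no queue is a broken invariant - fail
            // loudly rather than retrying forever.
            throw new IllegalStateException("null queue with live references");
        }
        this.queue.add(task);
        return true;
    }

    synchronized void release() {
        if (--this.referenceCount <= 0) {
            this.queue = null;
        }
    }
}
```
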
Spottedleaf
1128810029 Always recalculate light list on protochunk deserialize
I noticed during my stress testing that the total size of the
light list was far too large, which indicates many duplicates.
For me, this caused many GC problems which made stress testing
harder.

It turns out, it was possible for both the light list recalculation
logic to run _and_ the light list data from the NBT data to be
added. Since there is no logic to de-duplicate this list,
every chunk load would re-add all light sources into the light
list and the light list would grow uncontrollably.

Since the recalculation logic would often run, I have
decided to solve this by discarding the data on disk and always
just calculating the list from the chunk data alone. Additionally,
I have applied an optimization from Vanilla 1.20 to avoid
searching sections without light sources by first checking the
palette for possible block sources.

Now my stress tests do not have issues with GC at all.
2023-05-15 20:37:29 -07:00
Spottedleaf
94b1400a81 Optimise recalcBlockCounts() for empty sections
In 1.18, every chunk section is initialised to a non-null value
and recalcBlockCounts() is invoked for each section.
However, in a standard world, most sections are empty. In such cases,
recalcBlockCounts() would iterate over every position - even though
the block data would all be air. To avoid this, we skip
searching the section unless the palette indicates there _could_ be
a non-air block state or non-empty fluid state.

Profiling chunk loading initially showed recalcBlockCounts() over
sections with ZeroBitStorage data taking ~20% of the process;
now it takes <1%.
2023-05-15 20:30:16 -07:00
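
A sketch of the palette pre-check with simplified stand-in types: if the palette cannot encode anything but air, the 4096-entry scan is skipped outright.

```java
import java.util.Arrays;
import java.util.List;

final class RecalcBlockCountsSketch {
    record Section(List<String> palette, String[] blocks) {
        int recalcBlockCounts() {
            // The optimisation: the palette lists every state this section
            // can contain, so an all-air palette proves the section is empty.
            if (this.palette.stream().allMatch("air"::equals)) {
                return 0;
            }
            int nonAir = 0;
            for (String block : this.blocks) {
                if (!"air".equals(block)) {
                    nonAir++;
                }
            }
            return nonAir;
        }
    }

    public static void main(String[] args) {
        String[] blocks = new String[4096];
        Arrays.fill(blocks, "air");
        Section empty = new Section(List.of("air"), blocks);
        System.out.println(empty.recalcBlockCounts()); // 0, without scanning
    }
}
```
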
Spottedleaf
22085eae15 Properly cancel chunk load tasks that were not scheduled
Since the chunk load task was not scheduled, the entity/poi load
task fields will not be set, and the task complete counter
will not be adjusted. Thus, the chunk load task will never complete.

To resolve this, detect when the entity/poi tasks were not scheduled
and decrement the task complete counter in such cases.
2023-05-15 12:27:19 -07:00
Spottedleaf
7595ff6bb2 Mark POI/Entity load tasks as completed before releasing scheduling lock
It must be marked as completed during that lock hold since the
waiters field is set to null. Thus, any other thread attempting
a cancellation will fail to remove the callback from the waiters. Also, any
other thread attempting to cancel may set the completed field
to true which would cause accept() to fail as well.

Completion was always designed to happen while holding the
scheduling lock to prevent these race conditions. The code
was originally set up to complete while not holding the
scheduling lock to avoid invoking callbacks while holding the
lock, however the access to the completion field was not
considered.

Resolve this by marking the callback as completed during the
lock, but invoking the accept() function after releasing
the lock. This prevents any cancellation attempts from being
blocked, and allows the current thread to complete the callback
without any issues.
2023-05-15 11:42:31 -07:00
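
The shape of that fix in an illustrative sketch (field names assumed): the completed flag flips while the lock is held, so cancellation always observes a consistent state, but the callback itself runs after the lock is released.

```java
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Consumer;

final class CompletionSketch<T> {
    private final ReentrantLock schedulingLock = new ReentrantLock();
    private boolean completed;
    private Consumer<T> waiter;

    void complete(T value) {
        Consumer<T> callback;
        this.schedulingLock.lock();
        try {
            this.completed = true;  // cancellation can no longer win the race
            callback = this.waiter;
            this.waiter = null;
        } finally {
            this.schedulingLock.unlock();
        }
        if (callback != null) {
            callback.accept(value); // invoked outside the lock
        }
    }

    boolean cancel() {
        this.schedulingLock.lock();
        try {
            if (this.completed) {
                return false;       // completion already claimed the task
            }
            this.completed = true;
            this.waiter = null;
            return true;
        } finally {
            this.schedulingLock.unlock();
        }
    }
}
```
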
Spottedleaf
80af54eeda Synchronize PaperPermissionManager
Since multiple regions can exist, there are concurrent accesses
in this class. To prevent deadlock, the monitor is not held
when recalculating permissions, as Permissible holds its own
lock.

This fixes CMEs originating from this class.
2023-05-15 11:00:49 -07:00
Spottedleaf
051ec0dd65 Fix concurrent access to lookups field in RegistryOps
The concurrent access occurs on the Netty IO threads when
serializing packets. Thus, it seems this was an oversight by
the implementor of this function, as there is typically
more than one Netty IO thread.

Fixes https://github.com/PaperMC/Folia/issues/11
2023-05-15 00:26:56 -07:00
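
A plausible shape for such a fix (an assumption, not the actual patch): back the memoised lookups with a ConcurrentHashMap so multiple Netty IO threads can read and populate it safely.

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

final class LookupCacheSketch<K, V> {
    private final Map<K, Optional<V>> lookups = new ConcurrentHashMap<>();
    private final Function<K, Optional<V>> loader;

    LookupCacheSketch(Function<K, Optional<V>> loader) {
        this.loader = loader;
    }

    Optional<V> lookup(K key) {
        // computeIfAbsent is atomic per key; a plain HashMap here can be
        // structurally corrupted by concurrent packet serialisation.
        return this.lookups.computeIfAbsent(key, this.loader);
    }
}
```
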
Spottedleaf
31b5b1575b Use coordinate-based locking to increase chunk system parallelism
A significant overhead in Folia comes from the chunk system's
locks, the ticket lock and the scheduling lock. The public
test server, which had ~330 players, had significant performance
problems with these locks: ~80% of the time spent ticking
was _waiting_ for the locks to free. Given that it used
around 15 cores total at peak, this is a complete and utter loss
of potential.

To address this issue, I have replaced the ticket lock and scheduling
lock with two ReentrantAreaLocks. The ReentrantAreaLock takes a
shift, which is used internally to group positions into sections.
This grouping is necessary, as the possible radius of area that
needs to be acquired for any given lock usage is up to 64. As such,
the shift is critical to reduce the number of areas required to lock
for any lock operation. Currently, it is set to a shift of 6, which
is identical to the ticket level propagation shift (and, it must be
at least the ticket level propagation shift AND the region shift).

The chunk system locking changes required a complete rewrite of the
chunk system tick, chunk system unload, and chunk system ticket level
propagation - as all of the previous logic only works with a single
global lock.

This does introduce two other section shifts: the lock shift, and the
ticket shift. The lock shift is simply what shift the area locks use,
and the ticket shift represents the size of the ticket sections.
Currently, these values are just set to the region shift for simplicity.
However, they are not arbitrary: the lock shift must be at least the size
of the ticket shift and must be at least the size of the region shift.
The ticket shift must also be >= the ceil(log2(max ticket level source)).

The chunk system's ticket propagator is now global state, instead of
region state. This cleans up the logic for ticket levels significantly,
and removes usage of the region lock in this area, but it also means
that the addition of a ticket no longer creates a region. To alleviate
the side effects of this change, the global tick thread now processes
ticket level updates for each world every tick to guarantee eventual
ticket level processing. The chunk system also provides a hook to
process ticket level changes in a given _section_, so that the
region queue can guarantee that after adding its reference counter
that the region section is created/exists/won't be destroyed.

The ticket propagator operates by updating the sources in a single ticket
section, and propagating the updates to its 1 radius neighbours. This
allows the ticket updates to occur in parallel or selectively (see above).
Currently, the process ticket level update function operates by
polling from a concurrent queue of sections to update and simply
invoking the single section update logic. This allows the function
to operate completely in parallel, provided the queue is ordered right.
Additionally, this limits the area used in the ticket/scheduling lock
when processing updates, which should massively increase parallelism compared
to before.

The chunk system ticket addition for expirable ticket types has been modified
to no longer track exact tick deadlines, as this relies on what region the
ticket is in. Instead, the chunk system tracks a map of
lock section -> (chunk coordinate -> expire ticket count) and every ticket
has been changed to have a removeDelay count that is decremented each tick.
Each region searches its own sections to find tickets to try to expire.

Chunk system unloading has been modified to track unloads by lock section.
The ordering is determined by which section a chunk resides in.
The unload process now removes from unload sections and processes
the full unload stages (1, 2, 3) before moving to the next section, if possible.
This allows the unload logic to only hold one lock section at a time for
each lock, which is a massive parallelism increase.

In stress testing, these changes lowered the locking overhead to only 5%
from ~70%, which completely fixes the original problem as described.
2023-05-14 19:46:24 -07:00
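
The grouping arithmetic can be illustrated with a small standalone sketch (the real ReentrantAreaLock is far more involved): with a shift of 6, a radius-64 acquire touches at most a handful of section keys rather than thousands of chunk positions.

```java
import java.util.ArrayList;
import java.util.List;

final class AreaLockSectionsSketch {
    static final int SECTION_SHIFT = 6; // matches the commit's chosen shift

    // All section keys covered by a square radius around a chunk position.
    static List<Long> sectionsFor(int chunkX, int chunkZ, int radius) {
        List<Long> keys = new ArrayList<>();
        int minX = (chunkX - radius) >> SECTION_SHIFT;
        int maxX = (chunkX + radius) >> SECTION_SHIFT;
        int minZ = (chunkZ - radius) >> SECTION_SHIFT;
        int maxZ = (chunkZ + radius) >> SECTION_SHIFT;
        for (int sx = minX; sx <= maxX; sx++) {
            for (int sz = minZ; sz <= maxZ; sz++) {
                keys.add(((long) sz << 32) | (sx & 0xFFFFFFFFL));
            }
        }
        return keys;
    }

    public static void main(String[] args) {
        // A radius-64 acquire around chunk (100, 100) needs only 9 section
        // keys: the unit of contention is the section, not the chunk.
        System.out.println(sectionsFor(100, 100, 64).size()); // 9
    }
}
```
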
Jason
9bd857dabc Undo making JavaPlugin#logger field public (see PaperMC/Paper#9125) (#76) 2023-05-14 18:10:49 -07:00
Spottedleaf
10a11c3712 Do not access POI data for lodestone compass
Instead, we can just check the loaded chunk's block position for
the lodestone block, as that is at least safe enough for the light
engine compared to the POI access. This should make it safe for
off-region access.

Fixes https://github.com/PaperMC/Folia/issues/60
2023-05-13 17:30:55 -07:00