If the entity has a null callback, it has not been added to the
world - so, we should treat it the same as entity#isRemoved.
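A minimal sketch of the intended check (the accessor name below is
hypothetical; the real field/getter may differ):

    // Sketch only: an entity with no level callback was never added
    // to the world, so treat it exactly like a removed entity.
    static boolean isRemovedOrNeverAdded(final Entity entity) {
        return entity.isRemoved() || entity.getLevelCallback() == null;
    }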
This may be useful for plugins which want to perform operations
over large areas outside of the buffer zone provided by the
regionaliser, as it is not guaranteed that anything outside
of the buffer zone is owned. Then, the plugins may use
the schedulers depending on the result of the ownership
check.
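For example, a plugin might gate its logic on the ownership check
and fall back to the region scheduler otherwise (a sketch using
Folia's Bukkit.isOwnedByCurrentRegion and RegionScheduler API):

    import org.bukkit.Bukkit;
    import org.bukkit.Location;
    import org.bukkit.Material;
    import org.bukkit.plugin.Plugin;

    final class OwnershipExample {
        static void breakBlockSafely(final Plugin plugin, final Location target) {
            if (Bukkit.isOwnedByCurrentRegion(target)) {
                // The current region owns this location, so the block
                // may be modified directly.
                target.getBlock().setType(Material.AIR);
            } else {
                // Otherwise, hand the work to whichever region owns it.
                Bukkit.getRegionScheduler().execute(plugin, target, () ->
                    target.getBlock().setType(Material.AIR));
            }
        }
    }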
This event allows plugins to perform synchronous operations before
any region will tick. Plugins will not have to worry about the
possibility of a region ticking in parallel while listening
to the event.
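A minimal listener sketch; this appears to correspond to Folia's
RegionizedServerInitEvent, but treat the class/package name below as
an assumption and substitute the actual event if it differs:

    import io.papermc.paper.threadedregions.RegionizedServerInitEvent; // assumed name/package
    import org.bukkit.event.EventHandler;
    import org.bukkit.event.Listener;

    final class InitListener implements Listener {
        // No region can be ticking in parallel while this handler runs,
        // so cross-region state can be set up here without extra locking.
        @EventHandler
        public void onServerInit(final RegionizedServerInitEvent event) {
            // e.g. build global lookup tables before the first region tick
        }
    }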
- Add thread check for loadChunk
- Make isChunkGenerated use the region task queue to schedule
to "main"
- Don't complete async chunk future if not in the owning thread
for the chunk
Plugins must add "folia-supported: true" to their plugin.yml;
otherwise the server will refuse to load them.
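For example (only the folia-supported line is the required addition;
the other fields are an ordinary plugin.yml shown for context):

    name: ExamplePlugin
    version: 1.0.0
    main: com.example.ExamplePlugin
    api-version: "1.19"
    folia-supported: true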
Since Folia is a major breakage for plugins, the vast majority
of plugins will not function correctly on Folia. To prevent
user confusion from this, we will refuse to load the plugin
and provide a log indicating why - which will be much
more helpful than some random error log caused by
a breakage.
The generics pose a problem, and the parameter passed to the
Consumer is not needed in the API.
Additionally, stop trying to cancel Bukkit scheduler tasks on
plugin disable as the Bukkit scheduler does not work.
We should only iterate over the local region's entities, not the
global entity list, to set up the spawner state, as everything else
about the spawner state (player count / chunk count) is regionised.
Additionally, move the last spawner state to regionised state so that
the paper mobcaps command functions as expected.
It turns out, the scheduler is good enough right now - the main
bottleneck to scaling chunk workers is actually the chunk
system locking behavior (mostly the schedule lock, but the ticket
lock is there too).
Additionally, process ticket updates if either the mid tick
logic did anything or we processed any chunk tasks.
We process the mid tick logic at the start to be consistent
with the in-between task execution logic (which is not implemented),
and we process ticket updates to ensure that any full status changes
are processed from chunk tasks.
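Roughly, the per-iteration ordering described above looks like the
following sketch (the hooks are placeholders for the internal chunk
system calls, not the real method names):

    import java.util.function.BooleanSupplier;

    final class WorkerIteration {
        // Placeholders for the internal chunk system hooks.
        private final BooleanSupplier runMidTickTasks;
        private final BooleanSupplier pollAndRunChunkTask;
        private final Runnable processTicketUpdates;

        WorkerIteration(final BooleanSupplier midTick, final BooleanSupplier chunkTask,
                        final Runnable ticketUpdates) {
            this.runMidTickTasks = midTick;
            this.pollAndRunChunkTask = chunkTask;
            this.processTicketUpdates = ticketUpdates;
        }

        void runOnce() {
            // Mid tick logic first, matching the in-between task execution logic.
            final boolean ranMidTick = this.runMidTickTasks.getAsBoolean();
            // Then the actual chunk task.
            final boolean ranChunkTask = this.pollAndRunChunkTask.getAsBoolean();
            if (ranMidTick || ranChunkTask) {
                // Ticket updates run if either step did work, so any full
                // status changes produced by chunk tasks get picked up.
                this.processTicketUpdates.run();
            }
        }
    }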
The repeated I/O of creating the directory for the regionfile,
or of checking whether the file exists, can be heavy
when pushing chunk generation extremely hard - as each chunk gen
request may effectively go through to the I/O thread.
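One way to cut that I/O is to remember which region directories have
already been created - a sketch, not the actual patch:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    final class RegionDirectoryCache {
        // Directories already created/verified this run, so repeated chunk
        // generation requests do not hit the filesystem every time.
        private final Set<Path> created = ConcurrentHashMap.newKeySet();

        void ensureDirectory(final Path regionDirectory) throws IOException {
            if (this.created.contains(regionDirectory)) {
                return; // already created/verified, skip the I/O
            }
            Files.createDirectories(regionDirectory);
            this.created.add(regionDirectory);
        }
    }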
The softlock would occur when a dependency tree finished executing
all of its tasks and searched for the highest dependency tree
to queue tasks from, only to find that tree filled
with purged tasks. Because it would select an empty
tree to pull tasks from, it would never select another
tree to execute tasks from, as this re-selection logic only runs
after a task is executed.
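In other words, the selection must keep searching instead of stopping
at a tree whose remaining tasks were all purged. An illustrative
sketch (not the real scheduler types; purged tasks are modelled as
null entries):

    import java.util.ArrayDeque;
    import java.util.Queue;

    final class TreeSelector {
        // Each "tree" is modelled as a queue of tasks for illustration.
        private final Queue<Queue<Runnable>> trees = new ArrayDeque<>();

        Runnable selectNextTask() {
            Queue<Runnable> tree;
            while ((tree = this.trees.peek()) != null) {
                while (!tree.isEmpty()) {
                    final Runnable task = tree.poll();
                    if (task != null) {
                        return task; // found a live task
                    }
                    // null entry = purged task, keep draining
                }
                // Tree only held purged tasks; drop it and try the next
                // one instead of waiting on work that will never run.
                this.trees.poll();
            }
            return null;
        }
    }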
The place/portal async functions now track entities that have been
removed from the world but have not yet teleported. When the server
shuts down, these entities will have their passenger tree restored
and re-added to the entity slices at the location they were teleporting to,
or, in the case of portals that did not run placeAsync yet,
the location they entered the portal at. This should ensure that
for regular teleports the entity is placed at its correct
target location, and for portalling that the entity is placed at
either the portal entrance location (where they entered) or the
portal destination. In any case, the entity is preserved at a
location and will survive the shutdown.
Additionally, delay player saving until after the worlds save. This
is to ensure that the save logic is performed only after
all teleportations have completed.
Fix some other misc issues as well:
- Fix double nether portal creation by checking if a portal exists again
before creating it, fixing a race condition where two entites would portal
and neither would see that the other created a portal.
- Make all remove ticket add an unknown ticket.
In general this behavior is better since it means that unloads will only
ever occur at the next tick, rather than during the tick logic. Thus,
there will be no cases where a chunk is unloaded unexpectedly.
- Do not use fastFloor for calculating chunk position from block position
It is not going to return a good value outside of [-1024, 1024]
(see the conversion sketch after this list)
- Always perform mid tick update for the ticking regionised player chunk loader
If no entities were loaded, no chunks were loaded, and nothing else
happened - the logic would not have otherwise run. This fixed some rare
cases of chunks never loading for players after logging in.
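For the fastFloor item above: the integer shift is exact for every
block coordinate, which is why it is preferred over a floating-point
floor here:

    // blockCoordinate >> 4 is floor(blockCoordinate / 16) for every int,
    // including negatives: 17 -> 1, -1 -> -1, -16 -> -1, -17 -> -2.
    static int blockToChunk(final int blockCoordinate) {
        return blockCoordinate >> 4;
    }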
We can just synchronise on all of the map data accesses, but
this means we need to be careful about ensuring that no
sync loads occur, otherwise we could block other threads for
long periods of time.
Namely, everything after FEATURES. By creating a dependency
chain indicating what chunks are in use, we can safely
schedule completely independent tasks in parallel. This
will allow the chunk system to scale beyond 10 threads
per world.
Currently this patch needs some more testing.
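As a conceptual sketch of the "chunks in use" idea (illustrative
only, not the actual scheduler code): a generation step only runs
once every chunk position it needs has been claimed, so steps whose
claims do not overlap can safely run on different workers at once.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    final class ChunkClaims {
        private final Set<Long> inUse = ConcurrentHashMap.newKeySet();

        // Attempt to claim every chunk in a square radius around
        // (centerX, centerZ). Returns false (and releases any partial
        // claims) if another task owns one; the caller then requeues.
        boolean tryClaim(final int centerX, final int centerZ, final int radius) {
            final List<Long> claimed = new ArrayList<>();
            for (int dx = -radius; dx <= radius; ++dx) {
                for (int dz = -radius; dz <= radius; ++dz) {
                    final long key = key(centerX + dx, centerZ + dz);
                    if (!this.inUse.add(key)) {
                        claimed.forEach(this.inUse::remove);
                        return false;
                    }
                    claimed.add(key);
                }
            }
            return true;
        }

        void release(final int centerX, final int centerZ, final int radius) {
            for (int dx = -radius; dx <= radius; ++dx) {
                for (int dz = -radius; dz <= radius; ++dz) {
                    this.inUse.remove(key(centerX + dx, centerZ + dz));
                }
            }
        }

        private static long key(final int chunkX, final int chunkZ) {
            return ((long) chunkZ << 32) | (chunkX & 0xFFFFFFFFL);
        }
    }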
The simple solution is that we ignore entities/positions that are not
in the current region. Making retrieval of items in inventory
thread-safe is not going to happen.
This is so that it may be accessed concurrently
from many regions.
Additionally, make sure it does not leak memory by limiting
the number of entries it will cache.
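A minimal sketch of that shape - a size-bounded cache whose accesses
are synchronised so any region may use it (illustrative, not the
exact data structure used):

    import java.util.LinkedHashMap;
    import java.util.Map;

    final class BoundedCache<K, V> {
        private static final int MAX_ENTRIES = 1024; // illustrative limit

        // Access-ordered map that evicts the eldest entry past the cap.
        private final Map<K, V> cache = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(final Map.Entry<K, V> eldest) {
                return this.size() > MAX_ENTRIES;
            }
        };

        synchronized V get(final K key) {
            return this.cache.get(key);
        }

        synchronized void put(final K key, final V value) {
            this.cache.put(key, value);
        }
    }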
Now, all of the sleep status changes are pushed to the global
tick thread. The wake-up-all-players routine had to be modified
to use the task scheduler to ensure the player is woken up
in the right region context.
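The plugin-facing analogue of this pattern is to push per-player
work through the entity scheduler, which runs it on whichever region
currently owns that player (assuming Folia's EntityScheduler API):

    import io.papermc.paper.threadedregions.scheduler.ScheduledTask;
    import org.bukkit.entity.Player;
    import org.bukkit.plugin.Plugin;

    import java.util.function.Consumer;

    final class WakeUpExample {
        static void wakeUp(final Plugin plugin, final Player player) {
            final Consumer<ScheduledTask> task = ignored -> {
                if (player.isSleeping()) {
                    player.wakeup(true); // runs on the owning region's thread
                }
            };
            // The last argument would run if the player is removed before
            // the task executes; null means "do nothing" in that case.
            player.getScheduler().run(plugin, task, null);
        }
    }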
Fix an error that occurred while crashing on the global tick thread,
caused by the region field being null.
Now, the spawning should be running Vanilla logic, except
that it is calculated per region (which is what per-player
spawning was effectively achieving anyway).