Initial README documentation
First draft
This commit is contained in:
parent 282ded3b44
commit 43d0469e36

README.md (152 lines changed)

<div align=center>
<p>Fork of <a href="https://github.com/PaperMC/Paper">Paper</a> which adds regionised multithreading to the dedicated server.</p>
</div>

## Overview

Folia groups nearby loaded chunks to form an "independent region."
See [REGION_LOGIC.md](REGION_LOGIC.md) for exact details on how Folia
will group nearby chunks.
Each independent region has its own tick loop, which is ticked at the
regular Minecraft tickrate (20 TPS). The tick loops are executed
on a thread pool in parallel. There is no main thread anymore,
as each region effectively has its own "main thread" that executes
the entire tick loop.

For a server with many spread out players, Folia will create many
spread out regions and tick them all in parallel on a configurably
sized thread pool. Thus, Folia should scale well for servers like this.

Folia is also its own project; it will not be merged into Paper
for the foreseeable future.

## Plugin compatibility

There is no more main thread. I expect _every_ single plugin
that exists to require _some_ level of modification to function
in Folia. Additionally, multithreading of _any kind_ introduces
possible race conditions in plugin-held data - so, there are bound
to be changes that need to be made.

So, set your expectations for compatibility at zero.

## API plans

Currently, there is a lot of API that relies on the main thread.
I expect basically zero plugins that are compatible with Paper to
be compatible with Folia. However, there are plans to add API that
would allow Folia plugins to be compatible with Paper.

Take the Bukkit Scheduler, for example: it inherently
relies on a single main thread. Folia's RegionisedScheduler and Folia's
EntityScheduler allow scheduling of tasks to the "next tick" of whatever
region "owns" either a location or an entity. These could be implemented
on regular Paper, except they would schedule to the main thread - in both
cases, the execution of the task will occur on the thread that "owns" the
location or entity. This concept applies in general, as the current Paper
(single threaded) can be viewed as one giant "region" that encompasses
all chunks in all worlds.

It is not yet decided whether to add this API to Paper itself directly
or to Paperlib.

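To illustrate the point that such schedulers could be implemented on regular Paper, here is a minimal sketch (the class and method names are made up for this example; this is not Folia's API): on today's single threaded Paper, running a task on the region that "owns" a location degrades to running a normal main-thread task.

```java
import org.bukkit.Bukkit;
import org.bukkit.Location;
import org.bukkit.plugin.Plugin;

// Minimal sketch only - the class and method names here are illustrative,
// not part of Folia's API.
public final class LocationScheduler {

    private final Plugin plugin;

    public LocationScheduler(final Plugin plugin) {
        this.plugin = plugin;
    }

    // Runs the task on the next tick of whatever thread "owns" the location.
    // On single threaded Paper that owner is always the main thread, so this
    // degrades to an ordinary BukkitScheduler task.
    public void runAtLocation(final Location location, final Runnable task) {
        Bukkit.getScheduler().runTask(this.plugin, task);
    }
}
```
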
### The new rules

The most important rule is that the regions tick in _parallel_, and not
_concurrently_. They do not share data, they do not expect to share data,
and sharing of data _will_ cause data corruption.
Code that is running in one region must, under no circumstance,
access or modify data that is in another region. Just
because multithreading is in the name, it doesn't mean that everything
is now thread-safe. In fact, there are only a _few_ things that were
made thread-safe to make this happen. As time goes on, the number
of thread context checks will only grow, even _if_ it comes at a
performance penalty - _nobody_ is going to use or develop for a
server platform that is buggy as hell, and the only way to
prevent and find these bugs is to make bad accesses fail _hard_ at the
source of the bad access.

This means that Folia compatible plugins need to take advantage of
API like the RegionisedScheduler and the EntityScheduler to ensure
their code is running on the correct thread context.

In general, it is safe to assume that a region owns chunk data
in an approximate 8 chunk radius from the source of an event (i.e. a player
breaks a block, so the region can probably access the 8 chunks around that
block). But this is not guaranteed - plugins should take advantage of upcoming
thread-check API to ensure correct behavior.

The only guarantee of thread-safety comes from the fact that a
single region owns data in certain chunks - and if that region is
ticking, then it has full access to that data. This data is
specifically entity/chunk/poi data, and is entirely unrelated
to **ANY** plugin data.

Normal multithreading rules apply to data that plugins store/access
(their own data or another plugin's) - events/commands/etc. are called
in _parallel_ because regions are ticking in _parallel_ (we CANNOT
call them in a synchronous fashion, as this opens up deadlock issues
and would handicap performance). There is no easy way out of this;
it depends solely on what data is being accessed. Sometimes a
concurrent collection (like ConcurrentHashMap) is enough, but a
concurrent collection used carelessly will often only _hide_ threading
issues, which then become near impossible to debug.

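To make that last point concrete, here is a small, generic Java illustration (nothing Folia-specific): the map below is "concurrent", yet the read-then-write version still loses updates when two regions call it at the same time, while the single atomic merge() call does not.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public final class KillCounter {

    private final Map<String, Integer> kills = new ConcurrentHashMap<>();

    // Broken under parallel regions: two threads can read the same old value
    // and both write back old + 1, silently losing an increment, even though
    // every individual map operation is thread-safe.
    public void addKillRacy(final String playerName) {
        final Integer current = this.kills.get(playerName);
        this.kills.put(playerName, current == null ? 1 : current + 1);
    }

    // Correct: merge() performs the read-modify-write as one atomic operation.
    public void addKill(final String playerName) {
        this.kills.merge(playerName, 1, Integer::sum);
    }
}
```
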
### Current API additions

- RegionisedScheduler and EntityScheduler acting as a replacement for
  the BukkitScheduler; however, they are not yet fully featured.

### Current broken API

- Most API that interacts with portals / respawning players / some
  player login API is broken.
- ALL scoreboard API is considered broken (this is global state that
  I've not figured out how to properly implement yet).
- World loading/unloading.
- Entity#teleport. This will NEVER UNDER ANY CIRCUMSTANCE come back;
  use teleportAsync instead (see the sketch after this list).
- Could be more.

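For reference, a small usage sketch of the asynchronous teleport that Paper's API already exposes (Entity#teleportAsync returns a CompletableFuture<Boolean>); the wrapper class here is illustrative only:

```java
import org.bukkit.Location;
import org.bukkit.entity.Entity;

public final class TeleportExample {

    // teleportAsync loads the destination chunks as needed and completes the
    // future with whether the teleport actually took place.
    public static void send(final Entity entity, final Location destination) {
        entity.teleportAsync(destination).thenAccept(success -> {
            if (!success) {
                // the teleport was refused (e.g. the entity was removed or an
                // event cancelled it) - react however the plugin sees fit
            }
        });
    }
}
```
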
### Planned API additions

- Proper asynchronous events. This would allow the result of an event
  to be completed later, on a different thread context. This is required
  to implement some things like spawn position selection, as asynchronous
  chunk loads are required when accessing chunk data out-of-region.
- World loading/unloading.
- TickThread#isTickThread overloads added to API.
- More to come here.

### Planned API changes

- Super aggressive thread checks across the board. This is absolutely
  required to prevent plugin devs from shipping code that may randomly
  break random parts of the server in entirely _undiagnosable_ manners.
- More to come here.

REGION_LOGIC.md (new file, 285 lines)

## Fundamental regionising logic

## Region

A region is simply a set of owned chunk positions and an
implementation-defined unique data object tied to that region. It is
important to note that for any non-dead region x and for each chunk
position y it owns, there is no other non-dead region z such that
the region z owns the chunk position y.

## Regioniser

Each world has its own regioniser. The regioniser is a term used
to describe the logic that the class "ThreadedRegioniser" executes
to create, maintain, and destroy regions. Maintenance of regions is
done by merging nearby regions together, marking which regions
are eligible to be ticked, and finally by splitting any regions
into smaller independent regions. Effectively, it is the logic
performed to ensure that groups of nearby chunks are considered
a single independent region.

## Guarantees the regioniser provides

The regioniser provides a set of important invariants that allow
regions to tick in parallel without race conditions:

### First invariant

The first invariant is simply that any chunk holder that exists
has one, and only one, corresponding region.

### Second invariant

The second invariant is that for every _existing_ chunk holder x that is
contained in a region, every chunk position within the
"merge radius" of x is owned by the region. Effectively, this invariant
guarantees that the region is not close to another region, which allows
the region to assume while ticking that it can create data for chunk
holders "close" to it.

### Third invariant

The third invariant is that a ticking region _cannot_ expand
the chunk positions it owns as it ticks. The third invariant
is important as it prevents ticking regions from "fighting"
over non-owned nearby chunks, to ensure that they truly tick
in parallel, no matter what chunk loads they may issue while
ticking.

To comply with the first invariant, the regioniser will
create "transient" regions _around_ ticking regions. Specifically,
"around" in this context means close enough that a merge would be
required, but not far enough away to be considered independent. The
transient regions created in these cases will be merged into the
ticking region when the ticking region finishes ticking.

The second and third invariants combined allow
the regioniser to guarantee that a ticking region may create
and then access chunk holders around it (i.e. sync loading) without
the possibility that it steps on another region's toes.

### Fourth invariant

The fourth invariant is that a region is only in one of four
states: "transient", "ready", "ticking", or "dead."

The "ready" state allows a state to transition to the "ticking" state,
|
||||
while the "transient" state is used as a state for a region that may
|
||||
not tick. The "dead" state is used to mark regions which should
|
||||
not be use.
|
||||
|
||||
The states transistions are explained later, as it ties in
|
||||
with the regioniser's merge and split logic.
|
||||
|
||||
## Regioniser implementation

The regioniser implementation is a description of how
the class "ThreadedRegioniser" adheres to the four invariants
described previously.

### Splitting the world into sections

The regioniser does not operate on chunk coordinates, but rather
on "region section coordinates." Region section coordinates simply
represent a grouping of NxN chunks on a grid, where N is some power
of two. The actual number is left ambiguous, as region section coordinates
are only an internal detail of how chunks are grouped.
For example, with N=16 the region section (0,0) encompasses all
chunks x in [0,15] and z in [0,15]. This concept is similar to how
the chunk coordinate (0,0) encompasses all blocks x in [0, 15]
and z in [0, 15]. As another example with N=16, the chunk (17, -5) is
contained within region section (1, -1).

Region section coordinates are used only as a performance
tradeoff in the regioniser, as approximating chunks by their
region section coordinate allows it to treat NxN chunks as a single
unit for regionising. This means that regions do not own chunk positions,
but rather own region section positions. The grouping of NxN chunks
allows the regionising logic to be performed only on
the creation/destruction of region sections.
For example, with N=16 this means up to NxN-1 = 255 fewer
operations in areas such as addChunk/region recalculation,
assuming region sections are always full.

### Implementation variables

The implementation variables control how aggressively the
regioniser will maintain and merge regions.

#### Recalculation count

The recalculation count is the minimum number of region sections
that a region must own before it is allowed to re-calculate. Note that
a recalculation operation simply calculates the set of independent
regions that exist within a region, to check whether a split can be
performed.
This is a simple performance knob that allows split logic to be
turned off for small regions, as it is unlikely that small regions
can be split in the first place.

#### Max dead section percent

The max dead section percent is the minimum percent of dead
sections in a region that must exist before a region can run
re-calculation logic.

#### Empty section creation radius

The empty section creation radius variable is used to determine
how many empty region sections are to exist around _any_
region section with at least one chunk.

Internally, the regioniser enforces the third invariant by
preventing ticking regions from owning new region sections.
The creation of empty sections around any non-empty section will
then enforce the second invariant.

#### Region section merge radius

The merge radius variable is used to ensure that, for any
existing region section x, every other region section y within
the merge radius is either owned by the region that owns x,
is pending a merge into the region that owns x, or the
region that owns x is pending a merge into the region that owns y.

#### Region section chunk shift

The region section chunk shift is simply log2(grid size N). Thus,
N = 1 << region section chunk shift. The conversion from
chunk position to region section position is additionally defined as
region section coordinate = chunk coordinate >> region section chunk shift.

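A tiny, self-contained illustration of this conversion in plain Java (the shift value of 4, i.e. N=16, is just the example used earlier); Java's arithmetic right shift floors toward negative infinity, which is what places chunk (17, -5) in region section (1, -1):

```java
public final class RegionSectionCoords {

    private static final int REGION_SECTION_CHUNK_SHIFT = 4; // N = 1 << 4 = 16

    // region section coordinate = chunk coordinate >> region section chunk shift
    public static int sectionCoordinate(final int chunkCoordinate) {
        return chunkCoordinate >> REGION_SECTION_CHUNK_SHIFT;
    }

    public static void main(final String[] args) {
        System.out.println(sectionCoordinate(17)); // 1
        System.out.println(sectionCoordinate(-5)); // -1
    }
}
```
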
### Operation

The regioniser is operated by invoking ThreadedRegioniser#addChunk(x, z)
or ThreadedRegioniser#removeChunk(x, z) when a chunk holder is created
or destroyed.

Additionally, ThreadedRegion#tryMarkTicking can be used by a caller
that attempts to move a region from the "ready" state to the "ticking"
state. It is vital to note that this function will return false if
the region is not in the "ready" state, as it is possible
that even a region considered to be "ready" in the past (i.e. scheduled
to tick) may be unexpectedly marked as "transient." Thus, the caller
needs to handle such cases. A caller that successfully marks
a region as ticking must later mark it as non-ticking by using
ThreadedRegion#markNotTicking.

The function ThreadedRegion#markNotTicking returns true if the
region was migrated from the "ticking" state to the "ready" state, and false
in all other cases. Effectively, it returns whether the current region
may be ticked again later.

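A hedged sketch of how a caller (for example, a tick scheduler) might drive a single region through this contract; the generic parameters are omitted and tickRegion(...) is a placeholder, so this is illustrative rather than the actual scheduler code.

```java
// Illustrative only - not the real scheduler. It only uses the two
// state-transition methods exactly as their contract is described above.
static void tickOnce(final ThreadedRegioniser.ThreadedRegion<?, ?> region) {
    if (!region.tryMarkTicking()) {
        // The region stopped being "ready" (e.g. it was demoted to "transient")
        // after it was scheduled; drop it and wait for it to become ready again.
        return;
    }
    try {
        tickRegion(region); // placeholder for the actual region tick body
    } finally {
        if (!region.markNotTicking()) {
            // The region did not return to the "ready" state (it is merging or
            // was split); its successor region(s) will be scheduled instead.
        }
    }
}
```
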
### Region section state

A region section's state is one of "dead" or "alive." A region section
may additionally be considered "non-empty" if it contains
at least one chunk position, and "empty" otherwise.

A region section is considered "dead" if and only if the region section
is "empty" and there exist no "non-empty" sections within the
empty section creation radius.

The existence of the dead section state is purely for performance, as it
allows the recalculation logic of a region to be delayed until the region
contains enough dead sections. However, dead sections are still
considered to belong to the region that owns them, just as alive sections do.

### Addition of chunks (addChunk)

The addition of chunks to the regioniser boils down to two cases:

#### Target region section already exists and is not empty

In this case, it simply adds the chunk to the section and returns.

#### Target region section does not exist or is empty

In this case, the region section will be created if it does not exist.
Additionally, the region sections in the "create empty radius" will be
created as well.

Then, any regions in the create empty radius + merge radius are collected
into a set X. This set represents the regions that need to be merged
later to adhere to the second invariant.

If the set X contains no elements, then a region is created in the ready
state to own all of the created sections.

If the set X contains just one region, then no regions need to be merged,
no region state is modified, and the sections are added to this
one region.

Merge logic needs to occur when there is more than one region in the
set X. From the set X, a region x is selected that is not ticking. If
no such x exists, then a region x is created. Every region section
created is added to the region x, as it is the region that is known
not to be ticking - this is done to adhere to the third invariant.

Every region y in the set X that is not x is merged into x if
y is not in the ticking state; otherwise, x runs the merge-later
logic into y.

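A condensed sketch of the decision flow just described. This is not the real implementation: the Region/RegionSection types and the helpers sectionAt, createSectionsAround, regionsNear, newReadyRegion and newRegion are hypothetical stand-ins that only mirror the prose above.

```java
// Hypothetical types and helpers throughout - this fragment only mirrors the
// prose above, it is not ThreadedRegioniser's actual code.
void addChunk(final int chunkX, final int chunkZ) {
    final RegionSection section = sectionAt(chunkX, chunkZ);
    if (section != null && !section.isEmpty()) {
        section.addChunk(chunkX, chunkZ); // case 1: existing, non-empty section
        return;
    }

    // case 2: create the target section plus the empty sections around it
    final java.util.List<RegionSection> created = createSectionsAround(chunkX, chunkZ);
    final java.util.Set<Region> nearby = regionsNear(created); // the set X

    if (nearby.isEmpty()) {
        newReadyRegion(created);
        return;
    }
    if (nearby.size() == 1) {
        nearby.iterator().next().addSections(created);
        return;
    }

    // more than one nearby region: pick (or create) a non-ticking region x
    // to own every created section
    final Region x = nearby.stream().filter(r -> !r.isTicking()).findFirst()
            .orElseGet(this::newRegion);
    x.addSections(created);

    for (final Region y : nearby) {
        if (y == x) {
            continue;
        }
        if (!y.isTicking()) {
            x.mergeNow(y);   // absorb y into x immediately
        } else {
            x.mergeLater(y); // x will merge into y once y finishes ticking
        }
    }
}
```
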
### Merge later logic

A merge-later operation may only take place from
a non-ticking, non-dead region x into a ticking region y.
The merge-later logic relies on maintaining, per region, a set of regions
to merge into later, and another set of regions
that are expected to merge into this region.
Effectively, a merge-later operation from x into y will add y to x's
merge-into-later set, and add x to y's expecting-merge-from set.

When the ticking region finishes ticking, it
will perform the merge logic for all expected merges.

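A small sketch of that bookkeeping, reusing the two set names that appear in ThreadedRegioniser (mergeIntoLater and expectingMergeFrom); everything else here is simplified and illustrative.

```java
// Illustrative fragment - the field names match the real class, the rest is
// deliberately simplified.
void mergeLater(final ThreadedRegion from, final ThreadedRegion into) {
    // precondition: 'from' is non-ticking and non-dead, 'into' is ticking
    from.mergeIntoLater.add(into);
    into.expectingMergeFrom.add(from);
}

void onFinishedTicking(final ThreadedRegion ticked) {
    // once the ticking region finishes, it performs the merge for every region
    // that was waiting on it (see "Merge logic" below)
    for (final ThreadedRegion waiting : ticked.expectingMergeFrom) {
        merge(waiting, ticked); // absorb 'waiting' into 'ticked'
    }
}
```
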
### Merge logic

A merge operation may only take place between a dead region x
and another region y, which may be either "transient"
or "ready." The region x is effectively absorbed into the
region y, as the sections in x are moved to the region y.

The merge-into-later set is also forwarded to the region y,
so that the regions x was to merge into later, y will
now merge into later.

Additionally, if there is implementation-specific data
on region x, the region callback to merge that data into the
region y is invoked.

The state of the region y may be updated after a merge operation
completes. For example, if the region x was "transient", then
the region y should be downgraded to transient as well. Specifically,
the region y should be marked as transient if region x contained
merge-later targets that were not y. The downgrading to transient is
required to adhere to the second invariant.

### Removal of chunks (removeChunk)

Removal of chunks from region sections simply updates
the region section's state to "dead" or "alive", as well as the state of the
region sections in the empty creation radius. It will not update
any region state, nor will it purge region sections.

### Region tick start (tryMarkTicking)

The tick start simply migrates the state to ticking, so that
invariants #2 and #3 can be met.

### Region tick end (markNotTicking)

At the end of a tick, the region's new state is not immediately known.

First, it must process its pending merges.

After it processes its pending merges, it must then check if the
region is now pending a merge into any other region. If it is, then
it transitions to the transient state.

Otherwise, it will process the removal of dead sections and attempt
to split into smaller regions. Note that it is guaranteed
that if a region can possibly be split, it must remove dead sections;
otherwise, this would contradict the rules used to build the region
in the first place.

@@ -6142,19 +6142,14 @@ index 0000000000000000000000000000000000000000..84b4ff07735fb84e28ee8966ffdedb1b
+}
diff --git a/src/main/java/io/papermc/paper/threadedregions/ThreadedRegioniser.java b/src/main/java/io/papermc/paper/threadedregions/ThreadedRegioniser.java
new file mode 100644
index 0000000000000000000000000000000000000000..f05546aa9124d4c0e34005f528483bf516e93c20
index 0000000000000000000000000000000000000000..3588a0ad7996d77f3e7ee076961e5b1210aa384e
--- /dev/null
+++ b/src/main/java/io/papermc/paper/threadedregions/ThreadedRegioniser.java
@@ -0,0 +1,1187 @@
@@ -0,0 +1,1186 @@
+package io.papermc.paper.threadedregions;
+
+import ca.spottedleaf.concurrentutil.collection.MultiThreadedQueue;
+import ca.spottedleaf.concurrentutil.map.SWMRLong2ObjectHashTable;
+import ca.spottedleaf.concurrentutil.util.ConcurrentUtil;
+import com.google.gson.JsonArray;
+import com.google.gson.JsonElement;
+import com.google.gson.JsonObject;
+import com.google.gson.JsonParser;
+import io.papermc.paper.util.CoordinateUtils;
+import it.unimi.dsi.fastutil.longs.Long2ReferenceOpenHashMap;
+import it.unimi.dsi.fastutil.longs.LongArrayList;
@@ -6164,7 +6159,6 @@ index 0000000000000000000000000000000000000000..f05546aa9124d4c0e34005f528483bf5
+import net.minecraft.world.entity.Entity;
+import net.minecraft.world.level.ChunkPos;
+
+import java.io.FileReader;
+import java.lang.invoke.VarHandle;
+import java.util.ArrayList;
+import java.util.Arrays;
@@ -6195,6 +6189,10 @@ index 0000000000000000000000000000000000000000..f05546aa9124d4c0e34005f528483bf5
+ private final MultiThreadedQueue<Operation> ops = new MultiThreadedQueue<>();
+ */
+
+ /*
+ * See REGION_LOGIC.md for complete details on what this class is doing
+ */
+
+ public ThreadedRegioniser(final int minSectionRecalcCount, final double maxDeadRegionPercent,
+ final int emptySectionCreateRadius, final int regionSectionMergeRadius,
+ final int regionSectionChunkShift, final ServerLevel world,
@@ -6408,7 +6406,7 @@ index 0000000000000000000000000000000000000000..f05546aa9124d4c0e34005f528483bf5
+ } else {
+ section.addChunk(chunkX, chunkZ);
+ }
+ // due to the fast check from above, we know the section is empty whether we need to create it
+ // due to the fast check from above, we know the section is empty whether we needed to create it or not
+
+ // enforce the adjacency invariant by creating / updating neighbour sections
+ final int createRadius = this.emptySectionCreateRadius;
@@ -6520,8 +6518,8 @@ index 0000000000000000000000000000000000000000..f05546aa9124d4c0e34005f528483bf5
+
+ if (delayedTrueMerge && firstUnlockedRegion != null) {
+ // we need to retire this region, as it can no longer tick
+ if (regionOfInterest.state == ThreadedRegion.STATE_STEADY_STATE) {
+ regionOfInterest.state = ThreadedRegion.STATE_NOT_READY;
+ if (regionOfInterest.state == ThreadedRegion.STATE_READY) {
+ regionOfInterest.state = ThreadedRegion.STATE_TRANSIENT;
+ this.callbacks.onRegionInactive(regionOfInterest);
+ }
+ }
@@ -6531,7 +6529,7 @@ index 0000000000000000000000000000000000000000..f05546aa9124d4c0e34005f528483bf5
+ }
+
+ if (regionOfInterestAlive) {
+ regionOfInterest.state = ThreadedRegion.STATE_STEADY_STATE;
+ regionOfInterest.state = ThreadedRegion.STATE_READY;
+ if (!regionOfInterest.mergeIntoLater.isEmpty() || !regionOfInterest.expectingMergeFrom.isEmpty()) {
+ throw new IllegalStateException("Should not happen on region " + this);
+ }
@@ -6610,7 +6608,7 @@ index 0000000000000000000000000000000000000000..f05546aa9124d4c0e34005f528483bf5
+
+ if (!region.mergeIntoLater.isEmpty()) {
+ // There is another nearby ticking region that we need to merge into
+ region.state = ThreadedRegion.STATE_NOT_READY;
+ region.state = ThreadedRegion.STATE_TRANSIENT;
+ this.callbacks.onRegionInactive(region);
+ // return to avoid removing dead sections or splitting, these actions will be performed
+ // by the region we merge into
@@ -6647,7 +6645,8 @@ index 0000000000000000000000000000000000000000..f05546aa9124d4c0e34005f528483bf5
+ // if we removed dead sections, we should check if the region can be split into smaller ones
+ // otherwise, the region remains alive
+ if (!removedDeadSections) {
+ region.state = ThreadedRegion.STATE_STEADY_STATE;
+ // didn't remove dead sections, don't check for split
+ region.state = ThreadedRegion.STATE_READY;
+ if (!region.expectingMergeFrom.isEmpty() || !region.mergeIntoLater.isEmpty()) {
+ throw new IllegalStateException("Illegal state " + region);
+ }
@@ -6711,7 +6710,7 @@ index 0000000000000000000000000000000000000000..f05546aa9124d4c0e34005f528483bf5
+
+ if (newRegions.size() == 1) {
+ // no need to split anything, we're done here
+ region.state = ThreadedRegion.STATE_STEADY_STATE;
+ region.state = ThreadedRegion.STATE_READY;
+ if (!region.expectingMergeFrom.isEmpty() || !region.mergeIntoLater.isEmpty()) {
+ throw new IllegalStateException("Illegal state " + region);
+ }
@@ -6745,7 +6744,7 @@ index 0000000000000000000000000000000000000000..f05546aa9124d4c0e34005f528483bf5
+ // only after invoking data callbacks
+
+ for (final ThreadedRegion<R, S> newRegion : newRegionsSet) {
+ newRegion.state = ThreadedRegion.STATE_STEADY_STATE;
+ newRegion.state = ThreadedRegion.STATE_READY;
+ if (!newRegion.expectingMergeFrom.isEmpty() || !newRegion.mergeIntoLater.isEmpty()) {
+ throw new IllegalStateException("Illegal state " + newRegion);
+ }
@@ -6758,8 +6757,8 @@ index 0000000000000000000000000000000000000000..f05546aa9124d4c0e34005f528483bf5
+
+ private static final AtomicLong REGION_ID_GENERATOR = new AtomicLong();
+
+ private static final int STATE_NOT_READY = 0;
+ private static final int STATE_STEADY_STATE = 1;
+ private static final int STATE_TRANSIENT = 0;
+ private static final int STATE_READY = 1;
+ private static final int STATE_TICKING = 2;
+ private static final int STATE_DEAD = 3;
+
@@ -6780,7 +6779,7 @@ index 0000000000000000000000000000000000000000..f05546aa9124d4c0e34005f528483bf5
+ public ThreadedRegion(final ThreadedRegioniser<R, S> regioniser) {
+ this.regioniser = regioniser;
+ this.id = REGION_ID_GENERATOR.getAndIncrement();
+ this.state = STATE_NOT_READY;
+ this.state = STATE_TRANSIENT;
+ this.data = regioniser.callbacks.createNewData(this);
+ }
+
@@ -6900,12 +6899,12 @@ index 0000000000000000000000000000000000000000..f05546aa9124d4c0e34005f528483bf5
+
+ private boolean tryKill() {
+ switch (this.state) {
+ case STATE_NOT_READY: {
+ case STATE_TRANSIENT: {
+ this.state = STATE_DEAD;
+ this.onRemove(false);
+ return true;
+ }
+ case STATE_STEADY_STATE: {
+ case STATE_READY: {
+ this.state = STATE_DEAD;
+ this.onRemove(true);
+ return true;
@@ -6955,12 +6954,12 @@ index 0000000000000000000000000000000000000000..f05546aa9124d4c0e34005f528483bf5
+ public boolean tryMarkTicking() {
+ this.regioniser.acquireWriteLock();
+ try {
+ if (this.state != STATE_STEADY_STATE) {
+ if (this.state != STATE_READY) {
+ return false;
+ }
+
+ if (!this.mergeIntoLater.isEmpty() || !this.expectingMergeFrom.isEmpty()) {
+ throw new IllegalStateException("Region " + this + " should not be steady state");
+ throw new IllegalStateException("Region " + this + " should not be ready");
+ }
+
+ this.state = STATE_TICKING;
@@ -6979,7 +6978,7 @@ index 0000000000000000000000000000000000000000..f05546aa9124d4c0e34005f528483bf5
+
+ this.regioniser.onRegionRelease(this);
+
+ return this.state == STATE_STEADY_STATE;
+ return this.state == STATE_READY;
+ } finally {
+ this.regioniser.releaseWriteLock();
+ }