Paper/Spigot-Server-Patches/0493-Optimize-Light-Engine.patch

From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
From: Aikar <aikar@aikar.co>
Date: Thu, 4 Jun 2020 22:43:29 -0400
Subject: [PATCH] Optimize Light Engine
Massive update to the light engine to improve performance and chunk loading/generation.
1) Massive bit packing/unpacking optimizations and inlining.
A lot of the cost comes from constantly packing and unpacking coordinate bits.
We now inline most bit operations and re-use the base x/y/z bits in many places
(see the packing sketch after this message).
This lets the CPU do all the math at once instead of jumping in and out of
function calls, and this much logic is also likely over the JVM's inlining limit for the JIT.
2) Applied a few of JellySquid's Phosphor mod optimizations, such as:
- ensuring we don't notify neighbor chunks when the neighbor chunk doesn't need to be notified
- reducing hasLight checks when initializing light, and probably some more; changes are tagged JellySquid where Phosphor influence was used.
3) Optimize hot-path access to the updating chunk to have less branching.
4) Optimize getBlock accesses to have less branching and less unpacking.
5) Add a separate urgent bucket for chunk light tasks. These tasks always cut in line over non-blocking light tasks.
6) Retain chunk priority while light tasks are enqueued. Previously, if a task came in at high priority
but the queue was full of tasks at a lower priority, it was simply added to the end. Now it can cut in
line to the front. This applies to both urgent and non-urgent tasks.
7) Buffer non-urgent tasks even if queueUpdate is called multiple times, to improve efficiency.
8) Fix an NPE risk that crashes the server when getting nibble data.
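
For reference, a minimal standalone sketch of the two packed-long coordinate layouts that the inlined
shifts and masks in this patch assume (the constants are taken directly from the diff below; the class
and method names are illustrative, not Mojang's, and this sketch is not part of the patch itself):

// Illustrative only. SectionPosition longs pack x into the top 22 bits, z into the
// next 22 bits, and y into the low 20 bits; BlockPosition longs pack x into 26 bits,
// z into 26 bits, and y into the low 12 bits.
final class PackedCoords {
    // SectionPosition layout: x(22) | z(22) | y(20)
    static long packSection(int x, int y, int z) {
        return (((long) x & 4194303L) << 42) | (((long) z & 4194303L) << 20) | ((long) y & 1048575L);
    }
    static int sectionX(long packed) { return (int) (packed >> 42); }        // top 22 bits, sign-extended
    static int sectionY(long packed) { return (int) (packed << 44 >> 44); }  // low 20 bits, sign-extended
    static int sectionZ(long packed) { return (int) (packed << 22 >> 42); }  // middle 22 bits, sign-extended

    // BlockPosition layout: x(26) | z(26) | y(12)
    static long packBlock(int x, int y, int z) {
        return (((long) x & 67108863L) << 38) | (((long) z & 67108863L) << 12) | ((long) y & 4095L);
    }
    static int blockX(long packed) { return (int) (packed >> 38); }
    static int blockY(long packed) { return (int) (packed << 52 >> 52); }
    static int blockZ(long packed) { return (int) (packed << 26 >> 38); }

    public static void main(String[] args) {
        long s = packSection(5, -3, 7);
        assert sectionX(s) == 5 && sectionY(s) == -3 && sectionZ(s) == 7;
        long b = packBlock(-100, 64, 3000);
        assert blockX(b) == -100 && blockY(b) == 64 && blockZ(b) == 3000;
    }
}

With the layout written out once, offsetting a neighbor section becomes a single expression such as
packSection(x + dx, y + dy, z + dz), which is exactly what the inlined math in LightEngineGraphSection
below computes without the helper call.
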
diff --git a/src/main/java/net/minecraft/server/level/ChunkProviderServer.java b/src/main/java/net/minecraft/server/level/ChunkProviderServer.java
index 0905aa93425a9c85f77ac2f4ffe28ec4ddb9f8b4..3a7b7df4080c04b618dfaf0b47cc54293332c41d 100644
--- a/src/main/java/net/minecraft/server/level/ChunkProviderServer.java
+++ b/src/main/java/net/minecraft/server/level/ChunkProviderServer.java
@@ -1067,7 +1067,7 @@ public class ChunkProviderServer extends IChunkProvider {
if (ChunkProviderServer.this.tickDistanceManager()) {
return true;
} else {
- ChunkProviderServer.this.lightEngine.queueUpdate();
+ //ChunkProviderServer.this.lightEngine.queueUpdate(); // Paper - not needed
return super.executeNext() || execChunkTask; // Paper
}
} finally {
diff --git a/src/main/java/net/minecraft/server/level/LightEngineGraphSection.java b/src/main/java/net/minecraft/server/level/LightEngineGraphSection.java
index c8bb040e7ed848877ec9c2f9b30dcda137cadf35..4ee7070364a8989eece4fa4237b529926821f9c9 100644
--- a/src/main/java/net/minecraft/server/level/LightEngineGraphSection.java
+++ b/src/main/java/net/minecraft/server/level/LightEngineGraphSection.java
@@ -16,14 +16,20 @@ public abstract class LightEngineGraphSection extends LightEngineGraph {
@Override
protected void a(long i, int j, boolean flag) {
+ // Paper start
+ int x = (int) (i >> 42);
+ int y = (int) (i << 44 >> 44);
+ int z = (int) (i << 22 >> 42);
+ // Paper end
for (int k = -1; k <= 1; ++k) {
for (int l = -1; l <= 1; ++l) {
for (int i1 = -1; i1 <= 1; ++i1) {
- long j1 = SectionPosition.a(i, k, l, i1);
+ if (k == 0 && l == 0 && i1 == 0) continue; // Paper
+ long j1 = (((long) (x + k) & 4194303L) << 42) | (((long) (y + l) & 1048575L)) | (((long) (z + i1) & 4194303L) << 20); // Paper
- if (j1 != i) {
+ //if (j1 != i) { // Paper - checked above
this.b(i, j1, j, flag);
- }
+ //} // Paper
}
}
}
@@ -34,10 +40,15 @@ public abstract class LightEngineGraphSection extends LightEngineGraph {
protected int a(long i, long j, int k) {
int l = k;
+ // Paper start
+ int x = (int) (i >> 42);
+ int y = (int) (i << 44 >> 44);
+ int z = (int) (i << 22 >> 42);
+ // Paper end
for (int i1 = -1; i1 <= 1; ++i1) {
for (int j1 = -1; j1 <= 1; ++j1) {
for (int k1 = -1; k1 <= 1; ++k1) {
- long l1 = SectionPosition.a(i, i1, j1, k1);
+ long l1 = (((long) (x + i1) & 4194303L) << 42) | (((long) (y + j1) & 1048575L)) | (((long) (z + k1) & 4194303L) << 20); // Paper
if (l1 == i) {
l1 = Long.MAX_VALUE;
diff --git a/src/main/java/net/minecraft/server/level/LightEngineThreaded.java b/src/main/java/net/minecraft/server/level/LightEngineThreaded.java
index e066848127cb9a42e8c39422691cc65132cac6bb..8a70286c017450dcc1926f471673933c493c555c 100644
--- a/src/main/java/net/minecraft/server/level/LightEngineThreaded.java
+++ b/src/main/java/net/minecraft/server/level/LightEngineThreaded.java
@@ -1,6 +1,7 @@
package net.minecraft.server.level;
import com.mojang.datafixers.util.Pair;
+import it.unimi.dsi.fastutil.longs.Long2ObjectLinkedOpenHashMap; // Paper
import it.unimi.dsi.fastutil.objects.ObjectArrayList;
import it.unimi.dsi.fastutil.objects.ObjectList;
import it.unimi.dsi.fastutil.objects.ObjectListIterator;
@@ -27,15 +28,149 @@ public class LightEngineThreaded extends LightEngine implements AutoCloseable {
private static final Logger LOGGER = LogManager.getLogger();
private final ThreadedMailbox<Runnable> b;
- private final ObjectList<Pair<LightEngineThreaded.Update, Runnable>> c = new ObjectArrayList();
- private final PlayerChunkMap d;
+ // Paper start
+ private static final int MAX_PRIORITIES = PlayerChunkMap.GOLDEN_TICKET + 2;
+
+ private boolean isChunkLightStatus(long pair) {
+ PlayerChunk playerChunk = playerChunkMap.getVisibleChunk(pair);
+ if (playerChunk == null) {
+ return false;
+ }
+ ChunkStatus status = PlayerChunk.getChunkStatus(playerChunk.getTicketLevel());
+ return status != null && status.isAtLeastStatus(ChunkStatus.LIGHT);
+ }
+
+ static class ChunkLightQueue {
+ public boolean shouldFastUpdate;
+ java.util.ArrayDeque<Runnable> pre = new java.util.ArrayDeque<Runnable>();
+ java.util.ArrayDeque<Runnable> post = new java.util.ArrayDeque<Runnable>();
+
+ ChunkLightQueue(long chunk) {}
+ }
+
+ static class PendingLightTask {
+ long chunkId;
+ IntSupplier priority;
+ Runnable pre;
+ Runnable post;
+ boolean fastUpdate;
+
+ public PendingLightTask(long chunkId, IntSupplier priority, Runnable pre, Runnable post, boolean fastUpdate) {
+ this.chunkId = chunkId;
+ this.priority = priority;
+ this.pre = pre;
+ this.post = post;
+ this.fastUpdate = fastUpdate;
+ }
+ }
+
+
+ // Retain the chunks priority level for queued light tasks
+ class LightQueue {
+ private int size = 0;
+ private final Long2ObjectLinkedOpenHashMap<ChunkLightQueue>[] buckets = new Long2ObjectLinkedOpenHashMap[MAX_PRIORITIES];
+ private final java.util.concurrent.ConcurrentLinkedQueue<PendingLightTask> pendingTasks = new java.util.concurrent.ConcurrentLinkedQueue<>();
+ private final java.util.concurrent.ConcurrentLinkedQueue<Runnable> priorityChanges = new java.util.concurrent.ConcurrentLinkedQueue<>();
+
+ private LightQueue() {
+ for (int i = 0; i < buckets.length; i++) {
+ buckets[i] = new Long2ObjectLinkedOpenHashMap<>();
+ }
+ }
+
+ public void changePriority(long pair, int currentPriority, int priority) {
+ this.priorityChanges.add(() -> {
+ ChunkLightQueue remove = this.buckets[currentPriority].remove(pair);
+ if (remove != null) {
+ ChunkLightQueue existing = this.buckets[Math.max(1, priority)].put(pair, remove);
+ if (existing != null) {
+ remove.pre.addAll(existing.pre);
+ remove.post.addAll(existing.post);
+ }
+ }
+ });
+ }
+
+ public final void addChunk(long chunkId, IntSupplier priority, Runnable pre, Runnable post) {
+ pendingTasks.add(new PendingLightTask(chunkId, priority, pre, post, true));
+ queueUpdate();
+ }
+
+ public final void add(long chunkId, IntSupplier priority, LightEngineThreaded.Update type, Runnable run) {
+ pendingTasks.add(new PendingLightTask(chunkId, priority, type == Update.PRE_UPDATE ? run : null, type == Update.POST_UPDATE ? run : null, false));
+ }
+ public final void add(PendingLightTask update) {
+ int priority = update.priority.getAsInt();
+ ChunkLightQueue lightQueue = this.buckets[priority].computeIfAbsent(update.chunkId, ChunkLightQueue::new);
+
+ if (update.pre != null) {
+ this.size++;
+ lightQueue.pre.add(update.pre);
+ }
+ if (update.post != null) {
+ this.size++;
+ lightQueue.post.add(update.post);
+ }
+ if (update.fastUpdate) {
+ lightQueue.shouldFastUpdate = true;
+ }
+ }
+
+ public final boolean isEmpty() {
+ return this.size == 0 && this.pendingTasks.isEmpty();
+ }
+
+ public final int size() {
+ return this.size;
+ }
+
+ public boolean poll(java.util.List<Runnable> pre, java.util.List<Runnable> post) {
+ PendingLightTask pending;
+ while ((pending = pendingTasks.poll()) != null) {
+ add(pending);
+ }
+ Runnable run;
+ while ((run = priorityChanges.poll()) != null) {
+ run.run();
+ }
+ boolean hasWork = false;
+ Long2ObjectLinkedOpenHashMap<ChunkLightQueue>[] buckets = this.buckets;
+ int priority = 0;
+ while (priority < MAX_PRIORITIES && !isEmpty()) {
+ Long2ObjectLinkedOpenHashMap<ChunkLightQueue> bucket = buckets[priority];
+ if (bucket.isEmpty()) {
+ priority++;
+ if (hasWork) {
+ return true;
+ } else {
+ continue;
+ }
+ }
+ ChunkLightQueue queue = bucket.removeFirst();
+ this.size -= queue.pre.size() + queue.post.size();
+ pre.addAll(queue.pre);
+ post.addAll(queue.post);
+ queue.pre.clear();
+ queue.post.clear();
+ hasWork = true;
+ if (queue.shouldFastUpdate) {
+ return true;
+ }
+ }
+ return hasWork;
+ }
+ }
+
+ final LightQueue queue = new LightQueue();
+ // Paper end
+ private final PlayerChunkMap d; private final PlayerChunkMap playerChunkMap; // Paper
private final Mailbox<ChunkTaskQueueSorter.a<Runnable>> e;
private volatile int f = 5;
private final AtomicBoolean g = new AtomicBoolean();
public LightEngineThreaded(ILightAccess ilightaccess, PlayerChunkMap playerchunkmap, boolean flag, ThreadedMailbox<Runnable> threadedmailbox, Mailbox<ChunkTaskQueueSorter.a<Runnable>> mailbox) {
super(ilightaccess, true, flag);
- this.d = playerchunkmap;
+ this.d = playerchunkmap; this.playerChunkMap = d; // Paper
this.e = mailbox;
this.b = threadedmailbox;
}
@@ -122,13 +257,9 @@ public class LightEngineThreaded extends LightEngine implements AutoCloseable {
}
private void a(int i, int j, IntSupplier intsupplier, LightEngineThreaded.Update lightenginethreaded_update, Runnable runnable) {
- this.e.a(ChunkTaskQueueSorter.a(() -> {
- this.c.add(Pair.of(lightenginethreaded_update, runnable));
- if (this.c.size() >= this.f) {
- this.b();
- }
-
- }, ChunkCoordIntPair.pair(i, j), intsupplier));
+ // Paper start - replace method
+ this.queue.add(ChunkCoordIntPair.pair(i, j), intsupplier, lightenginethreaded_update, runnable);
+ // Paper end
}
@Override
@@ -145,8 +276,19 @@ public class LightEngineThreaded extends LightEngine implements AutoCloseable {
public CompletableFuture<IChunkAccess> a(IChunkAccess ichunkaccess, boolean flag) {
ChunkCoordIntPair chunkcoordintpair = ichunkaccess.getPos();
- ichunkaccess.b(false);
- this.a(chunkcoordintpair.x, chunkcoordintpair.z, LightEngineThreaded.Update.PRE_UPDATE, SystemUtils.a(() -> {
+ // Paper start
+ //ichunkaccess.b(false); // Don't need to disable this
+ long pair = chunkcoordintpair.pair();
+ CompletableFuture<IChunkAccess> future = new CompletableFuture<>();
+ IntSupplier prioritySupplier = playerChunkMap.getPrioritySupplier(pair);
+ boolean[] skippedPre = {false};
+ this.queue.addChunk(pair, prioritySupplier, SystemUtils.a(() -> {
+ if (!isChunkLightStatus(pair)) {
+ future.complete(ichunkaccess);
+ skippedPre[0] = true;
+ return;
+ }
+ // Paper end
ChunkSection[] achunksection = ichunkaccess.getSections();
for (int i = 0; i < 16; ++i) {
@@ -164,55 +306,48 @@ public class LightEngineThreaded extends LightEngine implements AutoCloseable {
});
}
- this.d.c(chunkcoordintpair);
+ // this.d.c(chunkcoordintpair); // Paper - move into post task below
}, () -> {
return "lightChunk " + chunkcoordintpair + " " + flag;
- }));
- return CompletableFuture.supplyAsync(() -> {
+ // Paper start - merge the 2 together
+ }), () -> {
+ this.d.c(chunkcoordintpair); // Paper - release light tickets as post task to ensure they stay loaded until fully done
+ if (skippedPre[0]) return; // Paper - future's already complete
ichunkaccess.b(true);
super.b(chunkcoordintpair, false);
- return ichunkaccess;
- }, (runnable) -> {
- this.a(chunkcoordintpair.x, chunkcoordintpair.z, LightEngineThreaded.Update.POST_UPDATE, runnable);
+ // Paper start
+ future.complete(ichunkaccess);
});
+ return future;
+ // Paper end
}
public void queueUpdate() {
- if ((!this.c.isEmpty() || super.a()) && this.g.compareAndSet(false, true)) {
+ if ((!this.queue.isEmpty() || super.a()) && this.g.compareAndSet(false, true)) { // Paper
this.b.a((() -> { // Paper - decompile error
this.b();
this.g.set(false);
+ queueUpdate(); // Paper - if we still have work to do, do it!
}));
}
}
+ // Paper start - replace impl
+ private final java.util.List<Runnable> pre = new java.util.ArrayList<>();
+ private final java.util.List<Runnable> post = new java.util.ArrayList<>();
private void b() {
- int i = Math.min(this.c.size(), this.f);
- ObjectListIterator<Pair<LightEngineThreaded.Update, Runnable>> objectlistiterator = this.c.iterator();
-
- Pair pair;
- int j;
-
- for (j = 0; objectlistiterator.hasNext() && j < i; ++j) {
- pair = (Pair) objectlistiterator.next();
- if (pair.getFirst() == LightEngineThreaded.Update.PRE_UPDATE) {
- ((Runnable) pair.getSecond()).run();
- }
+ if (queue.poll(pre, post)) {
+ pre.forEach(Runnable::run);
+ pre.clear();
+ super.a(Integer.MAX_VALUE, true, true);
+ post.forEach(Runnable::run);
+ post.clear();
+ } else {
+ // might have level updates to go still
+ super.a(Integer.MAX_VALUE, true, true);
}
-
- objectlistiterator.back(j);
- super.a(Integer.MAX_VALUE, true, true);
-
- for (j = 0; objectlistiterator.hasNext() && j < i; ++j) {
- pair = (Pair) objectlistiterator.next();
- if (pair.getFirst() == LightEngineThreaded.Update.POST_UPDATE) {
- ((Runnable) pair.getSecond()).run();
- }
-
- objectlistiterator.remove();
- }
-
+ // Paper end
}
public void a(int i) {
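
The LightQueue added above keeps one FIFO bucket per priority level, so a chunk's pending light tasks
can be re-bucketed when its priority changes and poll() always drains the most urgent non-empty bucket
first. A condensed, single-threaded sketch of that bucketed-priority idea, independent of the Minecraft
types (the class name and simplified API are illustrative, not the patch's):

import java.util.ArrayDeque;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative only, not part of the patch: tasks keep insertion order within a
// bucket (per chunk), and a chunk's queued tasks can move to a new bucket later.
final class BucketedTaskQueue {
    private final Map<Long, ArrayDeque<Runnable>>[] buckets;

    @SuppressWarnings("unchecked")
    BucketedTaskQueue(int priorities) {
        buckets = new LinkedHashMap[priorities];
        for (int i = 0; i < priorities; i++) buckets[i] = new LinkedHashMap<>();
    }

    void add(long chunkKey, int priority, Runnable task) {
        buckets[priority].computeIfAbsent(chunkKey, k -> new ArrayDeque<>()).add(task);
    }

    // Move a chunk's queued tasks to a new priority bucket, merging with any tasks
    // already queued there (mirrors LightQueue.changePriority above).
    void changePriority(long chunkKey, int from, int to) {
        ArrayDeque<Runnable> moved = buckets[from].remove(chunkKey);
        if (moved != null) {
            ArrayDeque<Runnable> existing = buckets[to].put(chunkKey, moved);
            if (existing != null) moved.addAll(existing);
        }
    }

    // Run the tasks of the first chunk found in the most urgent non-empty bucket.
    boolean pollOne() {
        for (Map<Long, ArrayDeque<Runnable>> bucket : buckets) {
            Iterator<ArrayDeque<Runnable>> it = bucket.values().iterator();
            if (it.hasNext()) {
                ArrayDeque<Runnable> tasks = it.next();
                it.remove();
                tasks.forEach(Runnable::run);
                return true;
            }
        }
        return false;
    }
}

Urgency then reduces to which bucket index a chunk lands in: a chunk queued at index 0 is always
polled before anything sitting at a higher index, regardless of arrival order.
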
diff --git a/src/main/java/net/minecraft/server/level/PlayerChunk.java b/src/main/java/net/minecraft/server/level/PlayerChunk.java
index 48b87ed945a71b97d8a88cbaf4099dc966a8ab88..b19c3bdbf64dc44f01487a3d17236a03a95708b5 100644
--- a/src/main/java/net/minecraft/server/level/PlayerChunk.java
+++ b/src/main/java/net/minecraft/server/level/PlayerChunk.java
@@ -753,6 +753,7 @@ public class PlayerChunk {
ioPriority = com.destroystokyo.paper.io.PrioritizedTaskQueue.HIGH_PRIORITY;
}
chunkMap.world.asyncChunkTaskManager.raisePriority(location.x, location.z, ioPriority);
+ chunkMap.world.getChunkProvider().getLightEngine().queue.changePriority(location.pair(), getCurrentPriority(), priority);
}
if (getCurrentPriority() != priority) {
this.u.a(this.location, this::getCurrentPriority, priority, this::setPriority); // use preferred priority
diff --git a/src/main/java/net/minecraft/server/level/PlayerChunkMap.java b/src/main/java/net/minecraft/server/level/PlayerChunkMap.java
index 4441af38184adefdf686429d1d147b9bbb76b5a6..f3f906d8ffa195b100d5bd2d232ae8489d48d60d 100644
--- a/src/main/java/net/minecraft/server/level/PlayerChunkMap.java
+++ b/src/main/java/net/minecraft/server/level/PlayerChunkMap.java
@@ -322,6 +322,7 @@ public class PlayerChunkMap extends IChunkLoader implements PlayerChunk.d {
}
// Paper end
+ private final java.util.concurrent.ExecutorService lightThread;
public PlayerChunkMap(WorldServer worldserver, Convertable.ConversionSession convertable_conversionsession, DataFixer datafixer, DefinedStructureManager definedstructuremanager, Executor executor, IAsyncTaskHandler<Runnable> iasynctaskhandler, ILightAccess ilightaccess, ChunkGenerator chunkgenerator, WorldLoadListener worldloadlistener, Supplier<WorldPersistentData> supplier, int i, boolean flag) {
super(new File(convertable_conversionsession.a(worldserver.getDimensionKey()), "region"), datafixer, flag);
//this.visibleChunks = this.updatingChunks.clone(); // Paper - no more cloning
@@ -353,7 +354,15 @@ public class PlayerChunkMap extends IChunkLoader implements PlayerChunk.d {
Mailbox<Runnable> mailbox = Mailbox.a("main", iasynctaskhandler::a);
this.worldLoadListener = worldloadlistener;
- ThreadedMailbox<Runnable> lightthreaded; ThreadedMailbox<Runnable> threadedmailbox1 = lightthreaded = ThreadedMailbox.a(executor, "light"); // Paper
+ // Paper start - use light thread
+ ThreadedMailbox<Runnable> lightthreaded; ThreadedMailbox<Runnable> threadedmailbox1 = lightthreaded = ThreadedMailbox.a(lightThread = java.util.concurrent.Executors.newSingleThreadExecutor(r -> {
+ Thread thread = new Thread(r);
+ thread.setName(((WorldDataServer)world.getWorldData()).getName() + " - Light");
+ thread.setDaemon(true);
+ thread.setPriority(Thread.NORM_PRIORITY+1);
+ return thread;
+ }), "light");
+ // Paper end
this.p = new ChunkTaskQueueSorter(ImmutableList.of(threadedmailbox, mailbox, threadedmailbox1), executor, Integer.MAX_VALUE);
this.mailboxWorldGen = this.p.a(threadedmailbox, false);
@@ -699,6 +708,7 @@ public class PlayerChunkMap extends IChunkLoader implements PlayerChunk.d {
// Paper end
}
+ protected final IntSupplier getPrioritySupplier(long i) { return c(i); } // Paper - OBFHELPER
protected IntSupplier c(long i) {
return () -> {
PlayerChunk playerchunk = this.getVisibleChunk(i);
@@ -826,6 +836,7 @@ public class PlayerChunkMap extends IChunkLoader implements PlayerChunk.d {
@Override
public void close() throws IOException {
try {
+ this.lightThread.shutdown(); // Paper
this.p.close();
this.world.asyncChunkTaskManager.close(true); // Paper - Required since we're closing regionfiles in the next line
this.m.close();
diff --git a/src/main/java/net/minecraft/server/level/WorldServer.java b/src/main/java/net/minecraft/server/level/WorldServer.java
index 117772867eede63646d80bb7af4792a2db16d815..2181fbc31e96c74679185a49abbb3d7a2d5b6e9f 100644
--- a/src/main/java/net/minecraft/server/level/WorldServer.java
+++ b/src/main/java/net/minecraft/server/level/WorldServer.java
@@ -823,6 +823,7 @@ public class WorldServer extends World implements GeneratorAccessSeed {
}
gameprofilerfiller.exit();
timings.chunkTicksBlocks.stopTiming(); // Paper
+ getChunkProvider().getLightEngine().queueUpdate(); // Paper
// Paper end
}
}
diff --git a/src/main/java/net/minecraft/util/thread/ThreadedMailbox.java b/src/main/java/net/minecraft/util/thread/ThreadedMailbox.java
index 2efca1fe92b2e93dcbf5337eea8855b1b2b9a564..72bfda620f073fd3c3e4c43d78583386dadf95e6 100644
--- a/src/main/java/net/minecraft/util/thread/ThreadedMailbox.java
+++ b/src/main/java/net/minecraft/util/thread/ThreadedMailbox.java
@@ -110,7 +110,8 @@ public class ThreadedMailbox<T> implements Mailbox<T>, AutoCloseable, Runnable {
}
- @Override
+
+ public final void queue(T t0) { a(t0); } @Override // Paper - OBFHELPER
public void a(T t0) {
this.a.a(t0);
this.f();
diff --git a/src/main/java/net/minecraft/world/level/chunk/NibbleArray.java b/src/main/java/net/minecraft/world/level/chunk/NibbleArray.java
index 50fc6316bfd42a532f792ba7557783eea3988a55..c01284e6221b69f8fac3c74a7f9726a188f1f843 100644
--- a/src/main/java/net/minecraft/world/level/chunk/NibbleArray.java
+++ b/src/main/java/net/minecraft/world/level/chunk/NibbleArray.java
@@ -9,6 +9,13 @@ import net.minecraft.SystemUtils;
public class NibbleArray {
// Paper start
+ static final NibbleArray EMPTY_NIBBLE_ARRAY = new NibbleArray() {
+ @Override
+ public byte[] asBytes() {
+ throw new IllegalStateException();
+ }
+ };
+ long lightCacheKey = Long.MIN_VALUE;
public static byte[] EMPTY_NIBBLE = new byte[2048];
private static final int nibbleBucketSizeMultiplier = Integer.getInteger("Paper.nibbleBucketSize", 3072);
private static final int maxPoolSize = Integer.getInteger("Paper.maxNibblePoolSize", (int) Math.min(6, Math.max(1, Runtime.getRuntime().maxMemory() / 1024 / 1024 / 1024)) * (nibbleBucketSizeMultiplier * 8));
diff --git a/src/main/java/net/minecraft/world/level/lighting/LightEngineBlock.java b/src/main/java/net/minecraft/world/level/lighting/LightEngineBlock.java
index f6198069e3ca421b4f551939263c7cf8bd5b754e..e8d2415033c5db3c27e10a38e1ea75e60ebd693e 100644
--- a/src/main/java/net/minecraft/world/level/lighting/LightEngineBlock.java
+++ b/src/main/java/net/minecraft/world/level/lighting/LightEngineBlock.java
@@ -23,9 +23,11 @@ public final class LightEngineBlock extends LightEngineLayer<LightEngineStorageB
}
private int d(long i) {
- int j = BlockPosition.b(i);
- int k = BlockPosition.c(i);
- int l = BlockPosition.d(i);
+ // Paper start - inline math
+ int j = (int) (i >> 38);
+ int k = (int) ((i << 52) >> 52);
+ int l = (int) ((i << 26) >> 38);
+ // Paper end
IBlockAccess iblockaccess = this.a.c(j >> 4, l >> 4);
return iblockaccess != null ? iblockaccess.g(this.f.d(j, k, l)) : 0;
@@ -40,25 +42,33 @@ public final class LightEngineBlock extends LightEngineLayer<LightEngineStorageB
} else if (k >= 15) {
return k;
} else {
- int l = Integer.signum(BlockPosition.b(j) - BlockPosition.b(i));
- int i1 = Integer.signum(BlockPosition.c(j) - BlockPosition.c(i));
- int j1 = Integer.signum(BlockPosition.d(j) - BlockPosition.d(i));
+ // Paper start - reuse math - credit to JellySquid for idea
+ int jx = (int) (j >> 38);
+ int jy = (int) ((j << 52) >> 52);
+ int jz = (int) ((j << 26) >> 38);
+ int ix = (int) (i >> 38);
+ int iy = (int) ((i << 52) >> 52);
+ int iz = (int) ((i << 26) >> 38);
+ int l = Integer.signum(jx - ix);
+ int i1 = Integer.signum(jy - iy);
+ int j1 = Integer.signum(jz - iz);
+ // Paper end
EnumDirection enumdirection = EnumDirection.a(l, i1, j1);
if (enumdirection == null) {
return 15;
} else {
//MutableInt mutableint = new MutableInt(); // Paper - share mutableint, single threaded
- IBlockData iblockdata = this.a(j, mutableint);
-
- if (mutableint.getValue() >= 15) {
+ IBlockData iblockdata = this.getBlockOptimized(jx, jy, jz, mutableint); // Paper
+ int blockedLight = mutableint.getValue(); // Paper
+ if (blockedLight >= 15) { // Paper
return 15;
} else {
- IBlockData iblockdata1 = this.a(i, (MutableInt) null);
+ IBlockData iblockdata1 = this.getBlockOptimized(ix, iy, iz); // Paper
VoxelShape voxelshape = this.a(iblockdata1, i, enumdirection);
VoxelShape voxelshape1 = this.a(iblockdata, j, enumdirection.opposite());
- return VoxelShapes.b(voxelshape, voxelshape1) ? 15 : k + Math.max(1, mutableint.getValue());
+ return VoxelShapes.b(voxelshape, voxelshape1) ? 15 : k + Math.max(1, blockedLight); // Paper
}
}
}
@@ -66,14 +76,19 @@ public final class LightEngineBlock extends LightEngineLayer<LightEngineStorageB
@Override
protected void a(long i, int j, boolean flag) {
- long k = SectionPosition.e(i);
+ // Paper start - reuse unpacking, credit to JellySquid (Didn't do full optimization though)
+ int x = (int) (i >> 38);
+ int y = (int) ((i << 52) >> 52);
+ int z = (int) ((i << 26) >> 38);
+ long k = SectionPosition.blockPosAsSectionLong(x, y, z);
+ // Paper end
EnumDirection[] aenumdirection = LightEngineBlock.e;
int l = aenumdirection.length;
for (int i1 = 0; i1 < l; ++i1) {
EnumDirection enumdirection = aenumdirection[i1];
- long j1 = BlockPosition.a(i, enumdirection);
- long k1 = SectionPosition.e(j1);
+ long j1 = BlockPosition.getAdjacent(x, y, z, enumdirection); // Paper
+ long k1 = SectionPosition.getAdjacentFromBlockPos(x, y, z, enumdirection); // Paper
if (k == k1 || ((LightEngineStorageBlock) this.c).g(k1)) {
this.b(i, j1, j, flag);
@@ -98,27 +113,37 @@ public final class LightEngineBlock extends LightEngineLayer<LightEngineStorageB
}
}
- long j1 = SectionPosition.e(i);
- NibbleArray nibblearray = ((LightEngineStorageBlock) this.c).a(j1, true);
+ // Paper start
+ int baseX = (int) (i >> 38);
+ int baseY = (int) ((i << 52) >> 52);
+ int baseZ = (int) ((i << 26) >> 38);
+ long j1 = SectionPosition.blockPosAsSectionLong(baseX, baseY, baseZ);
+ NibbleArray nibblearray = this.c.updating.getUpdatingOptimized(j1);
+ // Paper end
EnumDirection[] aenumdirection = LightEngineBlock.e;
int k1 = aenumdirection.length;
for (int l1 = 0; l1 < k1; ++l1) {
EnumDirection enumdirection = aenumdirection[l1];
- long i2 = BlockPosition.a(i, enumdirection);
+ // Paper start
+ int newX = baseX + enumdirection.getAdjacentX();
+ int newY = baseY + enumdirection.getAdjacentY();
+ int newZ = baseZ + enumdirection.getAdjacentZ();
+ long i2 = BlockPosition.asLong(newX, newY, newZ);
if (i2 != j) {
- long j2 = SectionPosition.e(i2);
+ long j2 = SectionPosition.blockPosAsSectionLong(newX, newY, newZ);
+ // Paper end
NibbleArray nibblearray1;
if (j1 == j2) {
nibblearray1 = nibblearray;
} else {
- nibblearray1 = ((LightEngineStorageBlock) this.c).a(j2, true);
+ nibblearray1 = ((LightEngineStorageBlock) this.c).updating.getUpdatingOptimized(j2); // Paper
}
if (nibblearray1 != null) {
- int k2 = this.b(i2, i, this.a(nibblearray1, i2));
+ int k2 = this.b(i2, i, this.getNibbleLightInverse(nibblearray1, newX, newY, newZ)); // Paper
if (l > k2) {
l = k2;
diff --git a/src/main/java/net/minecraft/world/level/lighting/LightEngineLayer.java b/src/main/java/net/minecraft/world/level/lighting/LightEngineLayer.java
index 944a8c295ff9df0d96800ddc4f6763598cf61d0d..896020d7d680a2456c190cc2463248a908046ddc 100644
--- a/src/main/java/net/minecraft/world/level/lighting/LightEngineLayer.java
+++ b/src/main/java/net/minecraft/world/level/lighting/LightEngineLayer.java
@@ -23,10 +23,37 @@ public abstract class LightEngineLayer<M extends LightEngineStorageArray<M>, S e
protected final EnumSkyBlock b;
protected final S c;
private boolean f;
- protected final BlockPosition.MutableBlockPosition d = new BlockPosition.MutableBlockPosition();
+ protected final BlockPosition.MutableBlockPosition d = new BlockPosition.MutableBlockPosition(); protected final BlockPosition.MutableBlockPosition pos = d; // Paper
private final long[] g = new long[2];
- private final IBlockAccess[] h = new IBlockAccess[2];
+ private final IChunkAccess[] h = new IChunkAccess[2]; // Paper
+ // Paper start - see fully commented out method below (look for Bedrock)
+ // optimized method with less branching for cases where the extra scenarios aren't needed.
+ // avoid using the mutable version when possible
+ protected final IBlockData getBlockOptimized(int x, int y, int z, MutableInt mutableint) {
+ IChunkAccess iblockaccess = this.a(x >> 4, z >> 4);
+
+ if (iblockaccess == null) {
+ mutableint.setValue(16);
+ return Blocks.BEDROCK.getBlockData();
+ } else {
+ this.pos.setValues(x, y, z);
+ IBlockData iblockdata = iblockaccess.getType(x, y, z);
+ mutableint.setValue(iblockdata.b(this.a.getWorld(), this.pos));
+ return iblockdata.l() && iblockdata.e() ? iblockdata : Blocks.AIR.getBlockData();
+ }
+ }
+ protected final IBlockData getBlockOptimized(int x, int y, int z) {
+ IChunkAccess iblockaccess = this.a(x >> 4, z >> 4);
+
+ if (iblockaccess == null) {
+ return Blocks.BEDROCK.getBlockData();
+ } else {
+ IBlockData iblockdata = iblockaccess.getType(x, y, z);
+ return iblockdata.l() && iblockdata.e() ? iblockdata : Blocks.AIR.getBlockData();
+ }
+ }
+ // Paper end
public LightEngineLayer(ILightAccess ilightaccess, EnumSkyBlock enumskyblock, S s0) {
super(16, 256, 8192);
this.a = ilightaccess;
@@ -45,7 +72,7 @@ public abstract class LightEngineLayer<M extends LightEngineStorageArray<M>, S e
}
@Nullable
- private IBlockAccess a(int i, int j) {
+ private IChunkAccess a(int i, int j) { // Paper
long k = ChunkCoordIntPair.pair(i, j);
for (int l = 0; l < 2; ++l) {
@@ -54,7 +81,7 @@ public abstract class LightEngineLayer<M extends LightEngineStorageArray<M>, S e
}
}
- IBlockAccess iblockaccess = this.a.c(i, j);
+ IChunkAccess iblockaccess = (IChunkAccess) this.a.c(i, j); // Paper
for (int i1 = 1; i1 > 0; --i1) {
this.g[i1] = this.g[i1 - 1];
@@ -71,37 +98,39 @@ public abstract class LightEngineLayer<M extends LightEngineStorageArray<M>, S e
Arrays.fill(this.h, (Object) null);
}
- protected IBlockData a(long i, @Nullable MutableInt mutableint) {
- if (i == Long.MAX_VALUE) {
- if (mutableint != null) {
- mutableint.setValue(0);
- }
-
- return Blocks.AIR.getBlockData();
- } else {
- int j = SectionPosition.a(BlockPosition.b(i));
- int k = SectionPosition.a(BlockPosition.d(i));
- IBlockAccess iblockaccess = this.a(j, k);
-
- if (iblockaccess == null) {
- if (mutableint != null) {
- mutableint.setValue(16);
- }
-
- return Blocks.BEDROCK.getBlockData();
- } else {
- this.d.g(i);
- IBlockData iblockdata = iblockaccess.getType(this.d);
- boolean flag = iblockdata.l() && iblockdata.e();
-
- if (mutableint != null) {
- mutableint.setValue(iblockdata.b(this.a.getWorld(), (BlockPosition) this.d));
- }
-
- return flag ? iblockdata : Blocks.AIR.getBlockData();
- }
- }
- }
+ // Paper start - comment out, see getBlockOptimized
+// protected IBlockData a(long i, @Nullable MutableInt mutableint) {
+// if (i == Long.MAX_VALUE) {
+// if (mutableint != null) {
+// mutableint.setValue(0);
+// }
+//
+// return Blocks.AIR.getBlockData();
+// } else {
+// int j = SectionPosition.a(BlockPosition.b(i));
+// int k = SectionPosition.a(BlockPosition.d(i));
+// IBlockAccess iblockaccess = this.a(j, k);
+//
+// if (iblockaccess == null) {
+// if (mutableint != null) {
+// mutableint.setValue(16);
+// }
+//
+// return Blocks.BEDROCK.getBlockData();
+// } else {
+// this.d.g(i);
+// IBlockData iblockdata = iblockaccess.getType(this.d);
+// boolean flag = iblockdata.l() && iblockdata.e();
+//
+// if (mutableint != null) {
+// mutableint.setValue(iblockdata.b(this.a.getWorld(), (BlockPosition) this.d));
+// }
+//
+// return flag ? iblockdata : Blocks.AIR.getBlockData();
+// }
+// }
+// }
+ // Paper end
protected VoxelShape a(IBlockData iblockdata, long i, EnumDirection enumdirection) {
return iblockdata.l() ? iblockdata.a(this.a.getWorld(), this.d.g(i), enumdirection) : VoxelShapes.a();
@@ -136,8 +165,9 @@ public abstract class LightEngineLayer<M extends LightEngineStorageArray<M>, S e
return i == Long.MAX_VALUE ? 0 : 15 - this.c.i(i);
}
+ protected int getNibbleLightInverse(NibbleArray nibblearray, int x, int y, int z) { return 15 - nibblearray.a(x & 15, y & 15, z & 15); } // Paper - x/y/z version of below
protected int a(NibbleArray nibblearray, long i) {
- return 15 - nibblearray.a(SectionPosition.b(BlockPosition.b(i)), SectionPosition.b(BlockPosition.c(i)), SectionPosition.b(BlockPosition.d(i)));
+ return 15 - nibblearray.a((int) (i >> 38) & 15, (int) ((i << 52) >> 52) & 15, (int) ((i << 26) >> 38) & 15); // Paper
}
@Override
diff --git a/src/main/java/net/minecraft/world/level/lighting/LightEngineSky.java b/src/main/java/net/minecraft/world/level/lighting/LightEngineSky.java
index 37fa5faea6e2972e3eb8a3cbd1913ef38dc9456f..fa8d84ce6819b2f41ff7448dec5662ba2df76e77 100644
--- a/src/main/java/net/minecraft/world/level/lighting/LightEngineSky.java
+++ b/src/main/java/net/minecraft/world/level/lighting/LightEngineSky.java
@@ -38,21 +38,25 @@ public final class LightEngineSky extends LightEngineLayer<LightEngineStorageSky
return k;
} else {
//MutableInt mutableint = new MutableInt(); // Paper - share mutableint, single threaded
- IBlockData iblockdata = this.a(j, mutableint);
-
- if (mutableint.getValue() >= 15) {
+ // Paper start - use x/y/z and optimized block lookup
+ int jx = (int) (j >> 38);
+ int jy = (int) ((j << 52) >> 52);
+ int jz = (int) ((j << 26) >> 38);
+ IBlockData iblockdata = this.getBlockOptimized(jx, jy, jz, mutableint);
+ int blockedLight = mutableint.getValue();
+ if (blockedLight >= 15) {
+ // Paper end
return 15;
} else {
- int l = BlockPosition.b(i);
- int i1 = BlockPosition.c(i);
- int j1 = BlockPosition.d(i);
- int k1 = BlockPosition.b(j);
- int l1 = BlockPosition.c(j);
- int i2 = BlockPosition.d(j);
- boolean flag = l == k1 && j1 == i2;
- int j2 = Integer.signum(k1 - l);
- int k2 = Integer.signum(l1 - i1);
- int l2 = Integer.signum(i2 - j1);
+ // Paper start - inline math
+ int ix = (int) (i >> 38);
+ int iy = (int) ((i << 52) >> 52);
+ int iz = (int) ((i << 26) >> 38);
+ boolean flag = ix == jx && iz == jz;
+ int j2 = Integer.signum(jx - ix);
+ int k2 = Integer.signum(jy - iy);
+ int l2 = Integer.signum(jz - iz);
+ // Paper end
EnumDirection enumdirection;
if (i == Long.MAX_VALUE) {
@@ -61,7 +65,7 @@ public final class LightEngineSky extends LightEngineLayer<LightEngineStorageSky
enumdirection = EnumDirection.a(j2, k2, l2);
}
- IBlockData iblockdata1 = this.a(i, (MutableInt) null);
+ IBlockData iblockdata1 = i == Long.MAX_VALUE ? Blocks.AIR.getBlockData() : this.getBlockOptimized(ix, iy, iz); // Paper
VoxelShape voxelshape;
if (enumdirection != null) {
@@ -91,9 +95,9 @@ public final class LightEngineSky extends LightEngineLayer<LightEngineStorageSky
}
}
- boolean flag1 = i == Long.MAX_VALUE || flag && i1 > l1;
+ boolean flag1 = i == Long.MAX_VALUE || flag && iy > jy; // Paper rename vars to iy > jy
- return flag1 && k == 0 && mutableint.getValue() == 0 ? 0 : k + Math.max(1, mutableint.getValue());
+ return flag1 && k == 0 && blockedLight == 0 ? 0 : k + Math.max(1, blockedLight); // Paper
}
}
}
@@ -101,10 +105,14 @@ public final class LightEngineSky extends LightEngineLayer<LightEngineStorageSky
@Override
protected void a(long i, int j, boolean flag) {
- long k = SectionPosition.e(i);
- int l = BlockPosition.c(i);
- int i1 = SectionPosition.b(l);
- int j1 = SectionPosition.a(l);
+ // Paper start
+ int baseX = (int) (i >> 38);
+ int baseY = (int) ((i << 52) >> 52);
+ int baseZ = (int) ((i << 26) >> 38);
+ long k = SectionPosition.blockPosAsSectionLong(baseX, baseY, baseZ);
+ int i1 = baseY & 15;
+ int j1 = baseY >> 4;
+ // Paper end
int k1;
if (i1 != 0) {
@@ -119,15 +127,16 @@ public final class LightEngineSky extends LightEngineLayer<LightEngineStorageSky
k1 = l1;
}
- long i2 = BlockPosition.a(i, 0, -1 - k1 * 16, 0);
- long j2 = SectionPosition.e(i2);
+ int newBaseY = baseY + (-1 - k1 * 16); // Paper
+ long i2 = BlockPosition.asLong(baseX, newBaseY, baseZ); // Paper
+ long j2 = SectionPosition.blockPosAsSectionLong(baseX, newBaseY, baseZ); // Paper
if (k == j2 || ((LightEngineStorageSky) this.c).g(j2)) {
this.b(i, i2, j, flag);
}
- long k2 = BlockPosition.a(i, EnumDirection.UP);
- long l2 = SectionPosition.e(k2);
+ long k2 = BlockPosition.asLong(baseX, baseY + 1, baseZ); // Paper
+ long l2 = SectionPosition.blockPosAsSectionLong(baseX, baseY + 1, baseZ); // Paper
if (k == l2 || ((LightEngineStorageSky) this.c).g(l2)) {
this.b(i, k2, j, flag);
@@ -142,8 +151,8 @@ public final class LightEngineSky extends LightEngineLayer<LightEngineStorageSky
int k3 = 0;
while (true) {
- long l3 = BlockPosition.a(i, enumdirection.getAdjacentX(), -k3, enumdirection.getAdjacentZ());
- long i4 = SectionPosition.e(l3);
+ long l3 = BlockPosition.asLong(baseX + enumdirection.getAdjacentX(), baseY - k3, baseZ + enumdirection.getAdjacentZ()); // Paper
+ long i4 = SectionPosition.blockPosAsSectionLong(baseX + enumdirection.getAdjacentX(), baseY - k3, baseZ + enumdirection.getAdjacentZ()); // Paper
if (k == i4) {
this.b(i, l3, j, flag);
@@ -181,26 +190,36 @@ public final class LightEngineSky extends LightEngineLayer<LightEngineStorageSky
}
}
- long j1 = SectionPosition.e(i);
- NibbleArray nibblearray = ((LightEngineStorageSky) this.c).a(j1, true);
+ // Paper start
+ int baseX = (int) (i >> 38);
+ int baseY = (int) ((i << 52) >> 52);
+ int baseZ = (int) ((i << 26) >> 38);
+ long j1 = SectionPosition.blockPosAsSectionLong(baseX, baseY, baseZ);
+ NibbleArray nibblearray = this.c.updating.getUpdatingOptimized(j1);
+ // Paper end
EnumDirection[] aenumdirection = LightEngineSky.e;
int k1 = aenumdirection.length;
for (int l1 = 0; l1 < k1; ++l1) {
EnumDirection enumdirection = aenumdirection[l1];
- long i2 = BlockPosition.a(i, enumdirection);
- long j2 = SectionPosition.e(i2);
+ // Paper start
+ int newX = baseX + enumdirection.getAdjacentX();
+ int newY = baseY + enumdirection.getAdjacentY();
+ int newZ = baseZ + enumdirection.getAdjacentZ();
+ long i2 = BlockPosition.asLong(newX, newY, newZ);
+ long j2 = SectionPosition.blockPosAsSectionLong(newX, newY, newZ);
+ // Paper end
NibbleArray nibblearray1;
if (j1 == j2) {
nibblearray1 = nibblearray;
} else {
- nibblearray1 = ((LightEngineStorageSky) this.c).a(j2, true);
+ nibblearray1 = ((LightEngineStorageSky) this.c).updating.getUpdatingOptimized(j2); // Paper
}
if (nibblearray1 != null) {
if (i2 != j) {
- int k2 = this.b(i2, i, this.a(nibblearray1, i2));
+ int k2 = this.b(i2, i, this.getNibbleLightInverse(nibblearray1, newX, newY, newZ)); // Paper
if (l > k2) {
l = k2;
@@ -215,7 +234,7 @@ public final class LightEngineSky extends LightEngineLayer<LightEngineStorageSky
j2 = SectionPosition.a(j2, EnumDirection.UP);
}
- NibbleArray nibblearray2 = ((LightEngineStorageSky) this.c).a(j2, true);
+ NibbleArray nibblearray2 = this.c.updating.getUpdatingOptimized(j2); // Paper
if (i2 != j) {
int l2;
diff --git a/src/main/java/net/minecraft/world/level/lighting/LightEngineStorage.java b/src/main/java/net/minecraft/world/level/lighting/LightEngineStorage.java
index 9ba9efb181b9607f25b7c921e69e4c59b182d429..fc0162e7f543d230277457638f208a66537560d7 100644
--- a/src/main/java/net/minecraft/world/level/lighting/LightEngineStorage.java
+++ b/src/main/java/net/minecraft/world/level/lighting/LightEngineStorage.java
@@ -27,9 +27,9 @@ public abstract class LightEngineStorage<M extends LightEngineStorageArray<M>> e
protected final LongSet c = new LongOpenHashSet();
protected final LongSet d = new LongOpenHashSet();
protected volatile M e_visible; protected final Object visibleUpdateLock = new Object(); // Paper - diff on change, should be "visible" - force compile fail on usage change
- protected final M f; // Paper - diff on change, should be "updating"
+ protected final M f; protected final M updating; // Paper - diff on change, should be "updating"
protected final LongSet g = new LongOpenHashSet();
- protected final LongSet h = new LongOpenHashSet();
+ protected final LongSet h = new LongOpenHashSet(); LongSet dirty = h; // Paper - OBFHELPER
protected final Long2ObjectMap<NibbleArray> i = Long2ObjectMaps.synchronize(new Long2ObjectOpenHashMap());
private final LongSet n = new LongOpenHashSet();
private final LongSet o = new LongOpenHashSet();
@@ -37,33 +37,33 @@ public abstract class LightEngineStorage<M extends LightEngineStorageArray<M>> e
protected volatile boolean j;
protected LightEngineStorage(EnumSkyBlock enumskyblock, ILightAccess ilightaccess, M m0) {
- super(3, 16, 256);
+ super(3, 256, 256); // Paper - bump expected size of level sets to reduce hash collisions and rehashing (seen a lot of it)
this.l = enumskyblock;
this.m = ilightaccess;
- this.f = m0;
+ this.f = m0; updating = m0; // Paper
this.e_visible = m0.b(); // Paper - avoid copying light data
this.e_visible.d(); // Paper - avoid copying light data
}
- protected boolean g(long i) {
- return this.a(i, true) != null;
+ protected final boolean g(long i) { // Paper - final to help inlining
+ return this.updating.getUpdatingOptimized(i) != null; // Paper - inline to avoid branching
}
@Nullable
protected NibbleArray a(long i, boolean flag) {
// Paper start - avoid copying light data
if (flag) {
- return this.a(this.f, i);
+ return this.updating.getUpdatingOptimized(i);
} else {
synchronized (this.visibleUpdateLock) {
- return this.a(this.e_visible, i);
+ return this.e_visible.lookup.apply(i);
}
}
// Paper end - avoid copying light data
}
@Nullable
- protected NibbleArray a(M m0, long i) {
+ protected final NibbleArray a(M m0, long i) { // Paper
return m0.c(i);
}
@@ -77,27 +77,57 @@ public abstract class LightEngineStorage<M extends LightEngineStorageArray<M>> e
protected abstract int d(long i);
protected int i(long i) {
- long j = SectionPosition.e(i);
- NibbleArray nibblearray = this.a(j, true);
+ // Paper start - reuse and inline math, use Optimized Updating path
+ final int x = (int) (i >> 38);
+ final int y = (int) ((i << 52) >> 52);
+ final int z = (int) ((i << 26) >> 38);
+ long j = SectionPosition.blockPosAsSectionLong(x, y, z);
+ NibbleArray nibblearray = this.updating.getUpdatingOptimized(j);
+ // BUG: Sometimes returns null and crashes; try to recover by falling back to the visible copy, and if that is also null just return no light to prevent the crash.
+ if (nibblearray == null) {
+ nibblearray = this.e_visible.lookup.apply(j);
+ }
+ if (nibblearray == null) {
+ System.err.println("Null nibble, preventing crash " + BlockPosition.fromLong(i));
+ return 0;
+ }
- return nibblearray.a(SectionPosition.b(BlockPosition.b(i)), SectionPosition.b(BlockPosition.c(i)), SectionPosition.b(BlockPosition.d(i)));
+ return nibblearray.a(x & 15, y & 15, z & 15); // Paper - inline operations
+ // Paper end
}
protected void b(long i, int j) {
- long k = SectionPosition.e(i);
+ // Paper start - cache part of the math done in loop below
+ int x = (int) (i >> 38);
+ int y = (int) ((i << 52) >> 52);
+ int z = (int) ((i << 26) >> 38);
+ long k = SectionPosition.blockPosAsSectionLong(x, y, z);
+ // Paper end
if (this.g.add(k)) {
this.f.a(k);
}
NibbleArray nibblearray = this.a(k, true);
-
- nibblearray.a(SectionPosition.b(BlockPosition.b(i)), SectionPosition.b(BlockPosition.c(i)), SectionPosition.b(BlockPosition.d(i)), j);
-
- for (int l = -1; l <= 1; ++l) {
- for (int i1 = -1; i1 <= 1; ++i1) {
- for (int j1 = -1; j1 <= 1; ++j1) {
- this.h.add(SectionPosition.e(BlockPosition.a(i, i1, j1, l)));
+ nibblearray.a(x & 15, y & 15, z & 15, j); // Paper - use already calculated x/y/z
+
+ // Paper start - credit to JellySquid for a major optimization here:
+ /*
+ * An extremely important optimization is made here in regards to adding items to the pending notification set. The
+ * original implementation attempts to add the coordinate of every chunk which contains a neighboring block position
+ * even though a huge number of loop iterations will simply map to block positions within the same updating chunk.
+ *
+ * Our implementation here avoids this by pre-calculating the min/max chunk coordinates so we can iterate over only
+ * the relevant chunk positions once. This reduces what would always be 27 iterations to just 1-8 iterations.
+ *
+ * @reason Use faster implementation
+ * @author JellySquid
+ */
+ for (int z2 = (z - 1) >> 4; z2 <= (z + 1) >> 4; ++z2) {
+ for (int x2 = (x - 1) >> 4; x2 <= (x + 1) >> 4; ++x2) {
+ for (int y2 = (y - 1) >> 4; y2 <= (y + 1) >> 4; ++y2) {
+ this.dirty.add(SectionPosition.asLong(x2, y2, z2));
+ // Paper end
}
}
}
@@ -129,17 +159,23 @@ public abstract class LightEngineStorage<M extends LightEngineStorageArray<M>> e
}
if (k >= 2 && j != 2) {
- if (this.p.contains(i)) {
- this.p.remove(i);
- } else {
+ if (!this.p.remove(i)) { // Paper - remove useless contains - credit to JellySquid
+ //this.p.remove(i); // Paper
+ //} else { // Paper
this.f.a(i, this.j(i));
this.g.add(i);
this.k(i);
- for (int l = -1; l <= 1; ++l) {
- for (int i1 = -1; i1 <= 1; ++i1) {
- for (int j1 = -1; j1 <= 1; ++j1) {
- this.h.add(SectionPosition.e(BlockPosition.a(i, i1, j1, l)));
+ // Paper start - reuse x/y/z and only notify valid chunks - Credit to JellySquid (See above method for notes)
+ int x = (int) (i >> 38);
+ int y = (int) ((i << 52) >> 52);
+ int z = (int) ((i << 26) >> 38);
+
+ for (int z2 = (z - 1) >> 4; z2 <= (z + 1) >> 4; ++z2) {
+ for (int x2 = (x - 1) >> 4; x2 <= (x + 1) >> 4; ++x2) {
+ for (int y2 = (y - 1) >> 4; y2 <= (y + 1) >> 4; ++y2) {
+ this.dirty.add(SectionPosition.asLong(x2, y2, z2));
+ // Paper end
}
}
}
@@ -165,9 +201,9 @@ public abstract class LightEngineStorage<M extends LightEngineStorageArray<M>> e
return SectionPosition.e(j) == i;
});
} else {
- int j = SectionPosition.c(SectionPosition.b(i));
- int k = SectionPosition.c(SectionPosition.c(i));
- int l = SectionPosition.c(SectionPosition.d(i));
+ int j = (int) (i >> 42) << 4; // Paper - inline
+ int k = (int) (i << 44 >> 44) << 4; // Paper - inline
+ int l = (int) (i << 22 >> 42) << 4; // Paper - inline
for (int i1 = 0; i1 < 16; ++i1) {
for (int j1 = 0; j1 < 16; ++j1) {
@@ -194,7 +230,7 @@ public abstract class LightEngineStorage<M extends LightEngineStorageArray<M>> e
NibbleArray nibblearray;
while (longiterator.hasNext()) {
- i = (Long) longiterator.next();
+ i = longiterator.nextLong(); // Paper
this.a(lightenginelayer, i);
NibbleArray nibblearray1 = (NibbleArray) this.i.remove(i);
@@ -212,7 +248,7 @@ public abstract class LightEngineStorage<M extends LightEngineStorageArray<M>> e
longiterator = this.p.iterator();
while (longiterator.hasNext()) {
- i = (Long) longiterator.next();
+ i = longiterator.nextLong(); // Paper
this.l(i);
}
@@ -223,12 +259,13 @@ public abstract class LightEngineStorage<M extends LightEngineStorageArray<M>> e
Entry entry;
long j;
+ NibbleArray test = null; // Paper
while (objectiterator.hasNext()) {
entry = (Entry) objectiterator.next();
j = entry.getLongKey();
- if (this.g(j)) {
+ if ((test = this.updating.getUpdatingOptimized(j)) != null) { // Paper - dont look up nibble twice
nibblearray = (NibbleArray) entry.getValue();
- if (this.f.c(j) != nibblearray) {
+ if (test != nibblearray) { // Paper
this.a(lightenginelayer, j);
this.f.a(j, nibblearray);
this.g.add(j);
@@ -241,14 +278,14 @@ public abstract class LightEngineStorage<M extends LightEngineStorageArray<M>> e
longiterator = this.i.keySet().iterator();
while (longiterator.hasNext()) {
- i = (Long) longiterator.next();
+ i = longiterator.nextLong(); // Paper
this.b(lightenginelayer, i);
}
} else {
longiterator = this.n.iterator();
while (longiterator.hasNext()) {
- i = (Long) longiterator.next();
+ i = longiterator.nextLong(); // Paper
this.b(lightenginelayer, i);
}
}
@@ -269,15 +306,20 @@ public abstract class LightEngineStorage<M extends LightEngineStorageArray<M>> e
private void b(LightEngineLayer<M, ?> lightenginelayer, long i) {
if (this.g(i)) {
- int j = SectionPosition.c(SectionPosition.b(i));
- int k = SectionPosition.c(SectionPosition.c(i));
- int l = SectionPosition.c(SectionPosition.d(i));
+ // Paper start
+ int secX = (int) (i >> 42);
+ int secY = (int) (i << 44 >> 44);
+ int secZ = (int) (i << 22 >> 42);
+ int j = secX << 4; // baseX
+ int k = secY << 4; // baseY
+ int l = secZ << 4; // baseZ
+ // Paper end
EnumDirection[] aenumdirection = LightEngineStorage.k;
int i1 = aenumdirection.length;
for (int j1 = 0; j1 < i1; ++j1) {
EnumDirection enumdirection = aenumdirection[j1];
- long k1 = SectionPosition.a(i, enumdirection);
+ long k1 = SectionPosition.getAdjacentFromSectionPos(secX, secY, secZ, enumdirection); // Paper - avoid extra unpacking
if (!this.i.containsKey(k1) && this.g(k1)) {
for (int l1 = 0; l1 < 16; ++l1) {
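getAdjacentFromSectionPos is a helper introduced by this patch set; the point of calling it here is that the neighbour's key is built from the already-unpacked secX/secY/secZ instead of re-unpacking `i` once per direction. A hedged sketch of an equivalent helper, taking plain integer offsets rather than an EnumDirection since the real signature lives outside this hunk:

    // Illustrative only: combine pre-unpacked section coordinates with a
    // direction offset and repack, avoiding an extra unpack pass per neighbour.
    static long adjacentSectionKey(int secX, int secY, int secZ, int dx, int dy, int dz) {
        return (((secX + dx) & 0x3FFFFFL) << 42)
             | ((secY + dy) & 0xFFFFFL)
             | (((secZ + dz) & 0x3FFFFFL) << 20);
    }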
diff --git a/src/main/java/net/minecraft/world/level/lighting/LightEngineStorageArray.java b/src/main/java/net/minecraft/world/level/lighting/LightEngineStorageArray.java
index 0ab4917053c090c874e8bb59e750fd909636f89d..a994e7e93814796d1a5d86f8863d1d21bae51022 100644
--- a/src/main/java/net/minecraft/world/level/lighting/LightEngineStorageArray.java
+++ b/src/main/java/net/minecraft/world/level/lighting/LightEngineStorageArray.java
@@ -6,13 +6,18 @@ import net.minecraft.world.level.chunk.NibbleArray;
public abstract class LightEngineStorageArray<M extends LightEngineStorageArray<M>> {
- private final long[] b = new long[2];
- private final NibbleArray[] c = new NibbleArray[2];
+ // private final long[] b = new long[2]; // Paper - unused
+ private final NibbleArray[] c = new NibbleArray[]{NibbleArray.EMPTY_NIBBLE_ARRAY, NibbleArray.EMPTY_NIBBLE_ARRAY}; private final NibbleArray[] cache = c; // Paper - OBFHELPER
private boolean d;
protected final com.destroystokyo.paper.util.map.QueuedChangesMapLong2Object<NibbleArray> data; // Paper - avoid copying light data
protected final boolean isVisible; // Paper - avoid copying light data
- java.util.function.Function<Long, NibbleArray> lookup; // Paper - faster branchless lookup
+ // Paper start - faster lookups with less branching, use interface to avoid boxing instead of Function
+ public final NibbleArrayAccess lookup;
+ public interface NibbleArrayAccess {
+ NibbleArray apply(long id);
+ }
+ // Paper end
// Paper start - avoid copying light data
protected LightEngineStorageArray(com.destroystokyo.paper.util.map.QueuedChangesMapLong2Object<NibbleArray> data, boolean isVisible) {
if (isVisible) {
@@ -20,12 +25,14 @@ public abstract class LightEngineStorageArray<M extends LightEngineStorageArray<
}
this.data = data;
this.isVisible = isVisible;
+ // Paper end - avoid copying light data
+ // Paper start - faster lookups with less branching
if (isVisible) {
lookup = data::getVisibleAsync;
} else {
- lookup = data::getUpdating;
+ lookup = data.getUpdatingMap()::get; // jump straight to the sub map
}
- // Paper end - avoid copying light data
+ // Paper end
this.c();
this.d = true;
}
@@ -35,7 +42,9 @@ public abstract class LightEngineStorageArray<M extends LightEngineStorageArray<
public void a(long i) {
if (this.isVisible) { throw new IllegalStateException("writing to visible data"); } // Paper - avoid copying light data
NibbleArray updating = this.data.getUpdating(i); // Paper - pool nibbles
- this.data.queueUpdate(i, new NibbleArray().markPoolSafe(updating.getCloneIfSet())); // Paper - avoid copying light data - pool safe clone
+ NibbleArray nibblearray = new NibbleArray().markPoolSafe(updating.getCloneIfSet()); // Paper
+ nibblearray.lightCacheKey = i; // Paper
+ this.data.queueUpdate(i, nibblearray); // Paper - avoid copying light data - pool safe clone
if (updating.cleaner != null) MCUtil.scheduleTask(2, updating.cleaner, "Light Engine Release"); // Paper - delay clean incase anything holding ref was still using it
this.c();
}
@@ -44,34 +53,34 @@ public abstract class LightEngineStorageArray<M extends LightEngineStorageArray<
return lookup.apply(i) != null; // Paper - avoid copying light data
}
- @Nullable
- public final NibbleArray c(long i) { // Paper - final
- if (this.d) {
- for (int j = 0; j < 2; ++j) {
- if (i == this.b[j]) {
- return this.c[j];
- }
- }
- }
-
- NibbleArray nibblearray = lookup.apply(i); // Paper - avoid copying light data
+ // Paper start - less branching as we know we are using cache and updating
+ public final NibbleArray getUpdatingOptimized(final long i) { // Paper - final
+ final NibbleArray[] cache = this.cache;
+ if (cache[0].lightCacheKey == i) return cache[0];
+ if (cache[1].lightCacheKey == i) return cache[1];
+ final NibbleArray nibblearray = this.lookup.apply(i); // Paper - avoid copying light data
if (nibblearray == null) {
return null;
} else {
- if (this.d) {
- for (int k = 1; k > 0; --k) {
- this.b[k] = this.b[k - 1];
- this.c[k] = this.c[k - 1];
- }
-
- this.b[0] = i;
- this.c[0] = nibblearray;
- }
-
+ cache[1] = cache[0];
+ cache[0] = nibblearray;
return nibblearray;
}
}
+ // Paper end
+
+ @Nullable
+ public final NibbleArray c(final long i) { // Paper - final
+ // Paper start - optimize visible case or missed updating cases
+ if (this.d) {
+ // short circuit to optimized
+ return getUpdatingOptimized(i);
+ }
+
+ return this.lookup.apply(i);
+ // Paper end
+ }
@Nullable
public NibbleArray d(long i) {
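getUpdatingOptimized above replaces vanilla's parallel long[]/NibbleArray[] cache with two slots probed via the lightCacheKey stamped on each array. A self-contained sketch of the same two-slot MRU pattern, generic and not the patch's classes:

    // Two-slot most-recently-used cache: probe both slots, on a miss consult the
    // backing lookup and promote the result to slot 0, demoting slot 0 to slot 1.
    static final class TwoSlotCache<V> {
        private final long[] keys = { Long.MIN_VALUE, Long.MIN_VALUE };
        private final Object[] values = new Object[2];

        @SuppressWarnings("unchecked")
        V get(long key, java.util.function.LongFunction<V> backing) {
            if (keys[0] == key) return (V) values[0];
            if (keys[1] == key) return (V) values[1];
            V found = backing.apply(key);
            if (found != null) {
                keys[1] = keys[0]; values[1] = values[0];
                keys[0] = key;     values[0] = found;
            }
            return found;
        }
    }

The patch goes one step further and stores the key on the NibbleArray itself (lightCacheKey), so a hit needs no parallel key array at all.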
@@ -81,13 +90,14 @@ public abstract class LightEngineStorageArray<M extends LightEngineStorageArray<
public void a(long i, NibbleArray nibblearray) {
if (this.isVisible) { throw new IllegalStateException("writing to visible data"); } // Paper - avoid copying light data
+ nibblearray.lightCacheKey = i; // Paper
this.data.queueUpdate(i, nibblearray); // Paper - avoid copying light data
}
public void c() {
for (int i = 0; i < 2; ++i) {
- this.b[i] = Long.MAX_VALUE;
- this.c[i] = null;
+ // this.b[i] = Long.MAX_VALUE; // Paper - Unused
+ this.c[i] = NibbleArray.EMPTY_NIBBLE_ARRAY; // Paper
}
}
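The NibbleArrayAccess interface introduced above exists because java.util.function.Function<Long, NibbleArray> would box every section key on this hot path; a functional interface keyed on a primitive long takes the value directly. A minimal sketch of the pattern, names illustrative:

    // Functional interface keyed on a primitive long; method references such as
    // map::get can still be bound to it, and when the underlying map exposes a
    // primitive get(long) no Long is allocated per lookup.
    @FunctionalInterface
    interface LongKeyedLookup<V> {
        V apply(long key);
    }

java.util.function.LongFunction<V> would serve the same purpose; the patch defines its own nested interface so the return type can stay the concrete NibbleArray.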
diff --git a/src/main/java/net/minecraft/world/level/lighting/LightEngineStorageBlock.java b/src/main/java/net/minecraft/world/level/lighting/LightEngineStorageBlock.java
index bba0dda3b8a4740bb18b9a1fc1821dc4e5890fb7..3a3ab3f0a56127bda368b0d392406a078e85ecea 100644
--- a/src/main/java/net/minecraft/world/level/lighting/LightEngineStorageBlock.java
+++ b/src/main/java/net/minecraft/world/level/lighting/LightEngineStorageBlock.java
@@ -15,10 +15,14 @@ public class LightEngineStorageBlock extends LightEngineStorage<LightEngineStora
@Override
protected int d(long i) {
- long j = SectionPosition.e(i);
- NibbleArray nibblearray = this.a(j, false);
-
- return nibblearray == null ? 0 : nibblearray.a(SectionPosition.b(BlockPosition.b(i)), SectionPosition.b(BlockPosition.c(i)), SectionPosition.b(BlockPosition.d(i)));
+ // Paper start
+ int baseX = (int) (i >> 38);
+ int baseY = (int) ((i << 52) >> 52);
+ int baseZ = (int) ((i << 26) >> 38);
+ long j = (((long) (baseX >> 4) & 4194303L) << 42) | (((long) (baseY >> 4) & 1048575L)) | (((long) (baseZ >> 4) & 4194303L) << 20);
+ NibbleArray nibblearray = this.e_visible.lookup.apply(j);
+ return nibblearray == null ? 0 : nibblearray.a(baseX & 15, baseY & 15, baseZ & 15);
+ // Paper end
}
public static final class a extends LightEngineStorageArray<LightEngineStorageBlock.a> {
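The rewrite above folds two packed formats together: it unpacks a block-position long (26 bits X, 26 bits Z, 12 bits Y) and immediately repacks the section coordinates (block >> 4) into a section key, skipping the intermediate BlockPosition/SectionPosition calls. A standalone sketch of that conversion, assuming the same bit layouts as the shifts used in the patch:

    // blockKey layout assumed: bits 38-63 = X, bits 12-37 = Z, bits 0-11 = Y
    // sectionKey layout assumed: bits 42-63 = X, bits 20-41 = Z, bits 0-19 = Y
    static long blockKeyToSectionKey(long blockKey) {
        int x = (int) (blockKey >> 38);        // sign-extended 26-bit field
        int y = (int) (blockKey << 52 >> 52);  // sign-extended 12-bit field
        int z = (int) (blockKey << 26 >> 38);  // sign-extended 26-bit field
        return (((long) (x >> 4) & 4194303L) << 42)
             | ((long) (y >> 4) & 1048575L)
             | (((long) (z >> 4) & 4194303L) << 20);
    }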
diff --git a/src/main/java/net/minecraft/world/level/lighting/LightEngineStorageSky.java b/src/main/java/net/minecraft/world/level/lighting/LightEngineStorageSky.java
index 488403a6765598317faedc2d600ae82238e99e39..6d31b19c851081a37e6fcefdcdfcb7018fce6b26 100644
--- a/src/main/java/net/minecraft/world/level/lighting/LightEngineStorageSky.java
+++ b/src/main/java/net/minecraft/world/level/lighting/LightEngineStorageSky.java
@@ -28,7 +28,12 @@ public class LightEngineStorageSky extends LightEngineStorage<LightEngineStorage
@Override
protected int d(long i) {
- long j = SectionPosition.e(i);
+ // Paper start
+ int baseX = (int) (i >> 38);
+ int baseY = (int) ((i << 52) >> 52);
+ int baseZ = (int) ((i << 26) >> 38);
+ long j = SectionPosition.blockPosAsSectionLong(baseX, baseY, baseZ);
+ // Paper end
int k = SectionPosition.c(j);
synchronized (this.visibleUpdateLock) { // Paper - avoid copying light data
LightEngineStorageSky.a lightenginestoragesky_a = (LightEngineStorageSky.a) this.e_visible; // Paper - avoid copying light data - must be after lock acquire
@@ -49,7 +54,7 @@ public class LightEngineStorageSky extends LightEngineStorage<LightEngineStorage
}
}
- return nibblearray.a(SectionPosition.b(BlockPosition.b(i)), SectionPosition.b(BlockPosition.c(i)), SectionPosition.b(BlockPosition.d(i)));
+ return nibblearray.a(baseX & 15, (int) ((i << 52) >> 52) & 15, (int) baseZ & 15); // Paper - y changed above
} else {
return 15;
}
@@ -168,7 +173,7 @@ public class LightEngineStorageSky extends LightEngineStorage<LightEngineStorage
if (k != ((LightEngineStorageSky.a) this.f).b && SectionPosition.c(j) < k) {
NibbleArray nibblearray1;
- while ((nibblearray1 = this.a(j, true)) == null) {
+ while ((nibblearray1 = this.updating.getUpdatingOptimized(j)) == null) { // Paper
j = SectionPosition.a(j, EnumDirection.UP);
}
@@ -192,7 +197,10 @@ public class LightEngineStorageSky extends LightEngineStorage<LightEngineStorage
longiterator = this.m.iterator();
while (longiterator.hasNext()) {
- i = (Long) longiterator.next();
+ i = longiterator.nextLong(); // Paper
+ int baseX = (int) (i >> 42) << 4; // Paper
+ int baseY = (int) (i << 44 >> 44) << 4; // Paper
+ int baseZ = (int) (i << 22 >> 42) << 4; // Paper
j = this.c(i);
if (j != 2 && !this.n.contains(i) && this.l.add(i)) {
int l;
@@ -203,10 +211,10 @@ public class LightEngineStorageSky extends LightEngineStorage<LightEngineStorage
((LightEngineStorageSky.a) this.f).a(i);
}
- Arrays.fill(this.a(i, true).asBytesPoolSafe(), (byte) -1); // Paper
- k = SectionPosition.c(SectionPosition.b(i));
- l = SectionPosition.c(SectionPosition.c(i));
- int i1 = SectionPosition.c(SectionPosition.d(i));
+ Arrays.fill(this.updating.getUpdatingOptimized(i).asBytesPoolSafe(), (byte) -1); // Paper - use optimized
+ k = baseX; // Paper
+ l = baseY; // Paper
+ int i1 = baseZ; // Paper
EnumDirection[] aenumdirection = LightEngineStorageSky.k;
int j1 = aenumdirection.length;
@@ -215,7 +223,7 @@ public class LightEngineStorageSky extends LightEngineStorage<LightEngineStorage
for (int l1 = 0; l1 < j1; ++l1) {
EnumDirection enumdirection = aenumdirection[l1];
- k1 = SectionPosition.a(i, enumdirection);
+ k1 = SectionPosition.getAdjacentFromBlockPos(baseX, baseY, baseZ, enumdirection); // Paper
if ((this.n.contains(k1) || !this.l.contains(k1) && !this.m.contains(k1)) && this.g(k1)) {
for (int i2 = 0; i2 < 16; ++i2) {
for (int j2 = 0; j2 < 16; ++j2) {
@@ -248,16 +256,16 @@ public class LightEngineStorageSky extends LightEngineStorage<LightEngineStorage
for (int i3 = 0; i3 < 16; ++i3) {
for (j1 = 0; j1 < 16; ++j1) {
- long j3 = BlockPosition.a(SectionPosition.c(SectionPosition.b(i)) + i3, SectionPosition.c(SectionPosition.c(i)), SectionPosition.c(SectionPosition.d(i)) + j1);
+ long j3 = BlockPosition.a(baseX + i3, baseY, baseZ + j1); // Paper
- k1 = BlockPosition.a(SectionPosition.c(SectionPosition.b(i)) + i3, SectionPosition.c(SectionPosition.c(i)) - 1, SectionPosition.c(SectionPosition.d(i)) + j1);
+ k1 = BlockPosition.a(baseX + i3, baseY - 1, baseZ + j1); // Paper
lightenginelayer.a(j3, k1, lightenginelayer.b(j3, k1, 0), true);
}
}
} else {
for (k = 0; k < 16; ++k) {
for (l = 0; l < 16; ++l) {
- long k3 = BlockPosition.a(SectionPosition.c(SectionPosition.b(i)) + k, SectionPosition.c(SectionPosition.c(i)) + 16 - 1, SectionPosition.c(SectionPosition.d(i)) + l);
+ long k3 = BlockPosition.a(baseX + k, baseY + 16 - 1, baseZ + l); // Paper
lightenginelayer.a(Long.MAX_VALUE, k3, 0, true);
}
@@ -272,11 +280,14 @@ public class LightEngineStorageSky extends LightEngineStorage<LightEngineStorage
longiterator = this.n.iterator();
while (longiterator.hasNext()) {
- i = (Long) longiterator.next();
+ i = longiterator.nextLong(); // Paper
+ int baseX = (int) (i >> 42) << 4; // Paper
+ int baseY = (int) (i << 44 >> 44) << 4; // Paper
+ int baseZ = (int) (i << 22 >> 42) << 4; // Paper
if (this.l.remove(i) && this.g(i)) {
for (j = 0; j < 16; ++j) {
for (k = 0; k < 16; ++k) {
- long l3 = BlockPosition.a(SectionPosition.c(SectionPosition.b(i)) + j, SectionPosition.c(SectionPosition.c(i)) + 16 - 1, SectionPosition.c(SectionPosition.d(i)) + k);
+ long l3 = BlockPosition.a(baseX + j, baseY + 16 - 1, baseZ + k); // Paper
lightenginelayer.a(Long.MAX_VALUE, l3, 15, false);
}