+++ /dev/null
- Apache License
- Version 2.0, January 2004
- http://www.apache.org/licenses/
-
- TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
- 1. Definitions.
-
- "License" shall mean the terms and conditions for use, reproduction,
- and distribution as defined by Sections 1 through 9 of this document.
-
- "Licensor" shall mean the copyright owner or entity authorized by
- the copyright owner that is granting the License.
-
- "Legal Entity" shall mean the union of the acting entity and all
- other entities that control, are controlled by, or are under common
- control with that entity. For the purposes of this definition,
- "control" means (i) the power, direct or indirect, to cause the
- direction or management of such entity, whether by contract or
- otherwise, or (ii) ownership of fifty percent (50%) or more of the
- outstanding shares, or (iii) beneficial ownership of such entity.
-
- "You" (or "Your") shall mean an individual or Legal Entity
- exercising permissions granted by this License.
-
- "Source" form shall mean the preferred form for making modifications,
- including but not limited to software source code, documentation
- source, and configuration files.
-
- "Object" form shall mean any form resulting from mechanical
- transformation or translation of a Source form, including but
- not limited to compiled object code, generated documentation,
- and conversions to other media types.
-
- "Work" shall mean the work of authorship, whether in Source or
- Object form, made available under the License, as indicated by a
- copyright notice that is included in or attached to the work
- (an example is provided in the Appendix below).
-
- "Derivative Works" shall mean any work, whether in Source or Object
- form, that is based on (or derived from) the Work and for which the
- editorial revisions, annotations, elaborations, or other modifications
- represent, as a whole, an original work of authorship. For the purposes
- of this License, Derivative Works shall not include works that remain
- separable from, or merely link (or bind by name) to the interfaces of,
- the Work and Derivative Works thereof.
-
- "Contribution" shall mean any work of authorship, including
- the original version of the Work and any modifications or additions
- to that Work or Derivative Works thereof, that is intentionally
- submitted to Licensor for inclusion in the Work by the copyright owner
- or by an individual or Legal Entity authorized to submit on behalf of
- the copyright owner. For the purposes of this definition, "submitted"
- means any form of electronic, verbal, or written communication sent
- to the Licensor or its representatives, including but not limited to
- communication on electronic mailing lists, source code control systems,
- and issue tracking systems that are managed by, or on behalf of, the
- Licensor for the purpose of discussing and improving the Work, but
- excluding communication that is conspicuously marked or otherwise
- designated in writing by the copyright owner as "Not a Contribution."
-
- "Contributor" shall mean Licensor and any individual or Legal Entity
- on behalf of whom a Contribution has been received by Licensor and
- subsequently incorporated within the Work.
-
- 2. Grant of Copyright License. Subject to the terms and conditions of
- this License, each Contributor hereby grants to You a perpetual,
- worldwide, non-exclusive, no-charge, royalty-free, irrevocable
- copyright license to reproduce, prepare Derivative Works of,
- publicly display, publicly perform, sublicense, and distribute the
- Work and such Derivative Works in Source or Object form.
-
- 3. Grant of Patent License. Subject to the terms and conditions of
- this License, each Contributor hereby grants to You a perpetual,
- worldwide, non-exclusive, no-charge, royalty-free, irrevocable
- (except as stated in this section) patent license to make, have made,
- use, offer to sell, sell, import, and otherwise transfer the Work,
- where such license applies only to those patent claims licensable
- by such Contributor that are necessarily infringed by their
- Contribution(s) alone or by combination of their Contribution(s)
- with the Work to which such Contribution(s) was submitted. If You
- institute patent litigation against any entity (including a
- cross-claim or counterclaim in a lawsuit) alleging that the Work
- or a Contribution incorporated within the Work constitutes direct
- or contributory patent infringement, then any patent licenses
- granted to You under this License for that Work shall terminate
- as of the date such litigation is filed.
-
- 4. Redistribution. You may reproduce and distribute copies of the
- Work or Derivative Works thereof in any medium, with or without
- modifications, and in Source or Object form, provided that You
- meet the following conditions:
-
- (a) You must give any other recipients of the Work or
- Derivative Works a copy of this License; and
-
- (b) You must cause any modified files to carry prominent notices
- stating that You changed the files; and
-
- (c) You must retain, in the Source form of any Derivative Works
- that You distribute, all copyright, patent, trademark, and
- attribution notices from the Source form of the Work,
- excluding those notices that do not pertain to any part of
- the Derivative Works; and
-
- (d) If the Work includes a "NOTICE" text file as part of its
- distribution, then any Derivative Works that You distribute must
- include a readable copy of the attribution notices contained
- within such NOTICE file, excluding those notices that do not
- pertain to any part of the Derivative Works, in at least one
- of the following places: within a NOTICE text file distributed
- as part of the Derivative Works; within the Source form or
- documentation, if provided along with the Derivative Works; or,
- within a display generated by the Derivative Works, if and
- wherever such third-party notices normally appear. The contents
- of the NOTICE file are for informational purposes only and
- do not modify the License. You may add Your own attribution
- notices within Derivative Works that You distribute, alongside
- or as an addendum to the NOTICE text from the Work, provided
- that such additional attribution notices cannot be construed
- as modifying the License.
-
- You may add Your own copyright statement to Your modifications and
- may provide additional or different license terms and conditions
- for use, reproduction, or distribution of Your modifications, or
- for any such Derivative Works as a whole, provided Your use,
- reproduction, and distribution of the Work otherwise complies with
- the conditions stated in this License.
-
- 5. Submission of Contributions. Unless You explicitly state otherwise,
- any Contribution intentionally submitted for inclusion in the Work
- by You to the Licensor shall be under the terms and conditions of
- this License, without any additional terms or conditions.
- Notwithstanding the above, nothing herein shall supersede or modify
- the terms of any separate license agreement you may have executed
- with Licensor regarding such Contributions.
-
- 6. Trademarks. This License does not grant permission to use the trade
- names, trademarks, service marks, or product names of the Licensor,
- except as required for reasonable and customary use in describing the
- origin of the Work and reproducing the content of the NOTICE file.
-
- 7. Disclaimer of Warranty. Unless required by applicable law or
- agreed to in writing, Licensor provides the Work (and each
- Contributor provides its Contributions) on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
- implied, including, without limitation, any warranties or conditions
- of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
- PARTICULAR PURPOSE. You are solely responsible for determining the
- appropriateness of using or redistributing the Work and assume any
- risks associated with Your exercise of permissions under this License.
-
- 8. Limitation of Liability. In no event and under no legal theory,
- whether in tort (including negligence), contract, or otherwise,
- unless required by applicable law (such as deliberate and grossly
- negligent acts) or agreed to in writing, shall any Contributor be
- liable to You for damages, including any direct, indirect, special,
- incidental, or consequential damages of any character arising as a
- result of this License or out of the use or inability to use the
- Work (including but not limited to damages for loss of goodwill,
- work stoppage, computer failure or malfunction, or any and all
- other commercial damages or losses), even if such Contributor
- has been advised of the possibility of such damages.
-
- 9. Accepting Warranty or Additional Liability. While redistributing
- the Work or Derivative Works thereof, You may choose to offer,
- and charge a fee for, acceptance of support, warranty, indemnity,
- or other liability obligations and/or rights consistent with this
- License. However, in accepting such obligations, You may act only
- on Your own behalf and on Your sole responsibility, not on behalf
- of any other Contributor, and only if You agree to indemnify,
- defend, and hold each Contributor harmless for any liability
- incurred by, or claims asserted against, such Contributor by reason
- of your accepting any such warranty or additional liability.
-
- END OF TERMS AND CONDITIONS
-
- APPENDIX: How to apply the Apache License to your work.
-
- To apply the Apache License to your work, attach the following
- boilerplate notice, with the fields enclosed by brackets "[]"
- replaced with your own identifying information. (Don't include
- the brackets!) The text should be enclosed in the appropriate
- comment syntax for the file format. We also recommend that a
- file or class name and description of purpose be included on the
- same "printed page" as the copyright notice for easier
- identification within third-party archives.
-
- Copyright [yyyy] [name of copyright owner]
-
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
-
----------------
-
-Licenses for dependency projects can be found here:
-[http://akka.io/docs/akka/snapshot/project/licenses.html]
-
----------------
-
-akka-protobuf contains the sources of Google protobuf 2.5.0 runtime support,
-moved into the source package `akka.protobuf` so as to avoid version conflicts.
-For license information see COPYING.protobuf
<!-- Repackaged Akka -->
<dependency>
<groupId>${project.groupId}</groupId>
- <artifactId>repackaged-akka</artifactId>
+ <artifactId>repackaged-pekko</artifactId>
<version>${project.version}</version>
</dependency>
<!-- Config files -->
<dependency>
- <!-- finalname="configuration/initial/akka.conf" -->
+ <!-- finalname="configuration/initial/pekko.conf" -->
<groupId>${project.groupId}</groupId>
<artifactId>sal-clustering-config</artifactId>
<version>${project.version}</version>
<type>xml</type>
- <classifier>akkaconf</classifier>
+ <classifier>pekkoconf</classifier>
</dependency>
<dependency>
- <!-- finalname="configuration/factory/akka.conf" override="true" -->
+ <!-- finalname="configuration/factory/pekko.conf" override="true" -->
<groupId>${project.groupId}</groupId>
<artifactId>sal-clustering-config</artifactId>
<version>${project.version}</version>
<type>xml</type>
- <classifier>factoryakkaconf</classifier>
+ <classifier>factorypekkoconf</classifier>
</dependency>
<dependency>
<!-- finalname="configuration/initial/module-shards.conf" -->
</dependency>
<dependency>
<groupId>org.opendaylight.controller</groupId>
- <artifactId>repackaged-akka</artifactId>
+ <artifactId>repackaged-pekko</artifactId>
</dependency>
<dependency>
<groupId>org.opendaylight.controller</groupId>
<!-- includes all dependencies to class path -->
<classpath/>
<argument>org.opendaylight.controller.akka.segjournal.BenchmarkMain</argument>
- <!-- configuration taken from factory-akka.conf of sal-clustering-config -->
+ <!-- configuration taken from factory-pekko.conf of sal-clustering-config -->
<argument>--current</argument>
<!-- 100_000 messages to write -->
<argument>-n100000</argument>
import static org.opendaylight.controller.akka.segjournal.BenchmarkUtils.formatNanos;
import static org.opendaylight.controller.akka.segjournal.BenchmarkUtils.toMetricId;
-import akka.actor.ActorRef;
-import akka.actor.ActorSystem;
-import akka.persistence.AtomicWrite;
-import akka.persistence.PersistentRepr;
import com.google.common.base.Stopwatch;
import com.google.common.util.concurrent.ThreadFactoryBuilder;
import java.io.Serializable;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;
import org.apache.commons.io.FileUtils;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.persistence.AtomicWrite;
+import org.apache.pekko.persistence.PersistentRepr;
import org.opendaylight.controller.akka.segjournal.BenchmarkUtils.BenchmarkConfig;
import org.opendaylight.controller.akka.segjournal.SegmentedJournalActor.WriteMessages;
import org.opendaylight.controller.cluster.common.actor.MeteringBehavior;
static final String BENCHMARK_PAYLOAD_SIZE = "payload-size";
static final String BENCHMARK_PAYLOAD_SIZE_DEFAULT = "10K";
- static final String CURRENT_CONFIG_RESOURCE = "/initial/factory-akka.conf";
+ static final String CURRENT_CONFIG_RESOURCE = "/initial/factory-pekko.conf";
-    static final String CURRENT_CONFIG_PATH = "odl-cluster-data.akka.persistence.journal.segmented-file";
+    static final String CURRENT_CONFIG_PATH = "odl-cluster-data.pekko.persistence.journal.segmented-file";
private static final String[] BYTE_SFX = {"G", "M", "K"};
</dependency>
<!-- Configuration library -->
- <!-- This needs to be kept in sync with the version used by akka -->
+ <!-- This needs to be kept in sync with the version used by pekko -->
<dependency>
<groupId>com.typesafe</groupId>
<artifactId>config</artifactId>
- <version>1.4.2</version>
+ <version>1.4.3</version>
</dependency>
<dependency>
<groupId>com.typesafe</groupId>
<artifactId>ssl-config-core_2.13</artifactId>
- <version>0.4.3</version>
+ <version>0.6.1</version>
</dependency>
- <!-- Akka testkit -->
+ <!-- Pekko testkit -->
<dependency>
- <groupId>com.typesafe.akka</groupId>
- <artifactId>akka-testkit_2.13</artifactId>
- <version>2.6.21</version>
+ <groupId>org.apache.pekko</groupId>
+ <artifactId>pekko-testkit_2.13</artifactId>
+ <version>1.0.2</version>
<scope>test</scope>
<exclusions>
<exclusion>
- <groupId>com.typesafe.akka</groupId>
- <artifactId>akka-actor_2.13</artifactId>
+ <groupId>org.apache.pekko</groupId>
+ <artifactId>pekko-actor_2.13</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
- <groupId>com.typesafe.akka</groupId>
- <artifactId>akka-actor-testkit-typed_2.13</artifactId>
- <version>2.6.21</version>
+ <groupId>org.apache.pekko</groupId>
+ <artifactId>pekko-actor-testkit-typed_2.13</artifactId>
+ <version>1.0.2</version>
<scope>test</scope>
<exclusions>
<exclusion>
- <groupId>com.typesafe.akka</groupId>
- <artifactId>akka-actor-typed_2.13</artifactId>
+ <groupId>org.apache.pekko</groupId>
+ <artifactId>pekko-actor-typed_2.13</artifactId>
</exclusion>
<exclusion>
- <groupId>com.typesafe.akka</groupId>
- <artifactId>akka-slf4j_2.13</artifactId>
+ <groupId>org.apache.pekko</groupId>
+ <artifactId>pekko-slf4j_2.13</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
- <groupId>com.typesafe.akka</groupId>
- <artifactId>akka-persistence-tck_2.13</artifactId>
- <version>2.6.21</version>
+ <groupId>org.apache.pekko</groupId>
+ <artifactId>pekko-persistence-tck_2.13</artifactId>
+ <version>1.0.2</version>
<scope>test</scope>
<exclusions>
<exclusion>
- <groupId>com.typesafe.akka</groupId>
- <artifactId>akka-persistence_2.13</artifactId>
+ <groupId>org.apache.pekko</groupId>
+ <artifactId>pekko-persistence_2.13</artifactId>
</exclusion>
</exclusions>
</dependency>
- <!-- Reactive Streams, used by Akka -->
+ <!-- Reactive Streams, used by Pekko -->
<dependency>
<groupId>org.reactivestreams</groupId>
<artifactId>reactive-streams</artifactId>
</dependency>
<dependency>
<groupId>org.opendaylight.controller</groupId>
- <artifactId>repackaged-akka</artifactId>
+ <artifactId>repackaged-pekko</artifactId>
</dependency>
</dependencies>
</project>
<features xmlns="http://karaf.apache.org/xmlns/features/v1.6.0" name="odl-controller-akka">
<feature version="0.0.0">
<feature>odl-controller-scala</feature>
- <bundle>mvn:com.typesafe/config/1.4.2</bundle>
- <bundle>mvn:com.typesafe/ssl-config-core_2.13/0.4.3</bundle>
+ <bundle>mvn:com.typesafe/config/1.4.3</bundle>
+ <bundle>mvn:com.typesafe/ssl-config-core_2.13/0.6.1</bundle>
<bundle>mvn:io.aeron/aeron-client/1.38.1</bundle>
<bundle>mvn:io.aeron/aeron-driver/1.38.1</bundle>
<bundle>mvn:org.agrona/agrona/1.15.2</bundle>
- <bundle>mvn:org.opendaylight.controller/repackaged-akka/${project.version}</bundle>
+ <bundle>mvn:org.opendaylight.controller/repackaged-pekko/${project.version}</bundle>
<bundle>mvn:org.reactivestreams/reactive-streams/1.0.4</bundle>
<feature>wrap</feature>
<bundle>wrap:mvn:org.lmdbjava/lmdbjava/0.7.0</bundle>
</dependency>
<dependency>
- <!-- finalname="configuration/initial/akka.conf" -->
+ <!-- finalname="configuration/initial/pekko.conf" -->
<groupId>org.opendaylight.controller</groupId>
<artifactId>sal-clustering-config</artifactId>
<type>xml</type>
- <classifier>akkaconf</classifier>
+ <classifier>pekkoconf</classifier>
</dependency>
<dependency>
- <!-- finalname="configuration/factory/akka.conf" override="true" -->
+ <!-- finalname="configuration/factory/pekko.conf" override="true" -->
<groupId>org.opendaylight.controller</groupId>
<artifactId>sal-clustering-config</artifactId>
<type>xml</type>
- <classifier>factoryakkaconf</classifier>
+ <classifier>factorypekkoconf</classifier>
</dependency>
<dependency>
<!-- finalname="configuration/initial/module-shards.conf" -->
<feature version="[14,15)">odl-mdsal-eos-dom</feature>
<feature version="[14,15)">odl-mdsal-dom-broker</feature>
<feature version="[14,15)">odl-mdsal-binding-dom-adapter</feature>
- <configfile finalname="configuration/initial/akka.conf">
- mvn:org.opendaylight.controller/sal-clustering-config/${project.version}/xml/akkaconf
+ <configfile finalname="configuration/initial/pekko.conf">
+ mvn:org.opendaylight.controller/sal-clustering-config/${project.version}/xml/pekkoconf
</configfile>
- <configfile finalname="configuration/factory/akka.conf" override="true">
- mvn:org.opendaylight.controller/sal-clustering-config/${project.version}/xml/factoryakkaconf
+ <configfile finalname="configuration/factory/pekko.conf" override="true">
+ mvn:org.opendaylight.controller/sal-clustering-config/${project.version}/xml/factorypekkoconf
</configfile>
<configfile finalname="configuration/initial/module-shards.conf">
mvn:org.opendaylight.controller/sal-clustering-config/${project.version}/xml/moduleshardconf
</dependency>
<dependency>
<groupId>org.opendaylight.controller</groupId>
- <artifactId>repackaged-akka</artifactId>
+ <artifactId>repackaged-pekko</artifactId>
</dependency>
<dependency>
<groupId>org.scala-lang</groupId>
<scope>test</scope>
</dependency>
<dependency>
- <groupId>com.typesafe.akka</groupId>
- <artifactId>akka-testkit_2.13</artifactId>
+ <groupId>org.apache.pekko</groupId>
+ <artifactId>pekko-testkit_2.13</artifactId>
</dependency>
</dependencies>
*/
package org.opendaylight.controller.cluster.access.commands;
-import akka.actor.ActorRef;
+import org.apache.pekko.actor.ActorRef;
import org.eclipse.jdt.annotation.NonNull;
import org.opendaylight.controller.cluster.access.concepts.TransactionIdentifier;
*/
package org.opendaylight.controller.cluster.access.commands;
-import akka.actor.ActorRef;
+import org.apache.pekko.actor.ActorRef;
import org.opendaylight.controller.cluster.access.ABIVersion;
import org.opendaylight.controller.cluster.access.concepts.Request;
import org.opendaylight.controller.cluster.access.concepts.TransactionIdentifier;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
import com.google.common.base.MoreObjects.ToStringHelper;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;
+import org.apache.pekko.actor.ActorRef;
import org.eclipse.jdt.annotation.NonNull;
import org.opendaylight.controller.cluster.access.ABIVersion;
import org.opendaylight.controller.cluster.access.concepts.TransactionIdentifier;
*/
package org.opendaylight.controller.cluster.access.commands;
-import akka.actor.ActorRef;
import com.google.common.base.MoreObjects.ToStringHelper;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;
+import org.apache.pekko.actor.ActorRef;
import org.eclipse.jdt.annotation.NonNull;
import org.opendaylight.controller.cluster.access.ABIVersion;
import org.opendaylight.controller.cluster.access.concepts.TransactionIdentifier;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
import com.google.common.base.MoreObjects.ToStringHelper;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.ObjectStreamException;
import java.util.Optional;
+import org.apache.pekko.actor.ActorRef;
import org.eclipse.jdt.annotation.NonNull;
import org.eclipse.jdt.annotation.Nullable;
import org.opendaylight.controller.cluster.access.concepts.TransactionIdentifier;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
import com.google.common.base.MoreObjects.ToStringHelper;
import java.io.DataInput;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;
+import org.apache.pekko.actor.ActorRef;
import org.opendaylight.controller.cluster.access.ABIVersion;
import org.opendaylight.controller.cluster.access.concepts.ClientIdentifier;
import org.opendaylight.controller.cluster.access.concepts.Request;
import static com.google.common.base.Preconditions.checkArgument;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
-import akka.actor.ActorSelection;
-import akka.serialization.JavaSerializer;
-import akka.serialization.Serialization;
import com.google.common.base.MoreObjects.ToStringHelper;
import com.google.common.collect.ImmutableList;
import java.io.DataInput;
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSelection;
+import org.apache.pekko.serialization.JavaSerializer;
+import org.apache.pekko.serialization.Serialization;
import org.eclipse.jdt.annotation.NonNull;
import org.opendaylight.controller.cluster.access.ABIVersion;
import org.opendaylight.controller.cluster.access.concepts.ClientIdentifier;
*/
package org.opendaylight.controller.cluster.access.commands;
-import akka.actor.ActorRef;
import java.io.ObjectInput;
+import org.apache.pekko.actor.ActorRef;
import org.opendaylight.controller.cluster.access.ABIVersion;
import org.opendaylight.controller.cluster.access.concepts.LocalHistoryIdentifier;
*/
package org.opendaylight.controller.cluster.access.commands;
-import akka.actor.ActorRef;
import java.io.ObjectInput;
+import org.apache.pekko.actor.ActorRef;
import org.opendaylight.controller.cluster.access.ABIVersion;
import org.opendaylight.controller.cluster.access.concepts.LocalHistoryIdentifier;
*/
package org.opendaylight.controller.cluster.access.commands;
-import akka.actor.ActorRef;
import java.io.IOException;
import java.io.ObjectInput;
+import org.apache.pekko.actor.ActorRef;
import org.eclipse.jdt.annotation.NonNull;
import org.opendaylight.controller.cluster.access.ABIVersion;
import org.opendaylight.controller.cluster.access.concepts.TransactionIdentifier;
import static com.google.common.base.Preconditions.checkArgument;
-import akka.actor.ActorRef;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;
+import org.apache.pekko.actor.ActorRef;
import org.opendaylight.controller.cluster.access.ABIVersion;
import org.opendaylight.controller.cluster.access.concepts.TransactionIdentifier;
import org.opendaylight.yangtools.concepts.WritableObjects;
*/
package org.opendaylight.controller.cluster.access.commands;
-import akka.actor.ActorRef;
import com.google.common.base.Preconditions;
import java.io.DataInput;
import java.io.IOException;
+import org.apache.pekko.actor.ActorRef;
import org.opendaylight.controller.cluster.access.ABIVersion;
import org.opendaylight.controller.cluster.access.concepts.LocalHistoryIdentifier;
import org.opendaylight.controller.cluster.access.concepts.Request;
*/
package org.opendaylight.controller.cluster.access.commands;
-import akka.actor.ActorRef;
import com.google.common.base.MoreObjects.ToStringHelper;
import com.google.common.collect.ImmutableList;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
+import org.apache.pekko.actor.ActorRef;
import org.opendaylight.controller.cluster.access.ABIVersion;
import org.opendaylight.controller.cluster.access.concepts.SliceableMessage;
import org.opendaylight.controller.cluster.access.concepts.TransactionIdentifier;
import static com.google.common.base.Preconditions.checkState;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
import java.util.ArrayList;
import java.util.List;
+import org.apache.pekko.actor.ActorRef;
import org.eclipse.jdt.annotation.NonNull;
import org.opendaylight.controller.cluster.access.concepts.TransactionIdentifier;
import org.opendaylight.yangtools.concepts.Identifiable;
*/
package org.opendaylight.controller.cluster.access.commands;
-import akka.actor.ActorRef;
+import org.apache.pekko.actor.ActorRef;
import org.opendaylight.controller.cluster.access.concepts.RequestException;
/**
*/
package org.opendaylight.controller.cluster.access.commands;
-import akka.actor.ActorRef;
import java.io.ObjectInput;
+import org.apache.pekko.actor.ActorRef;
import org.opendaylight.controller.cluster.access.ABIVersion;
import org.opendaylight.controller.cluster.access.concepts.LocalHistoryIdentifier;
*/
package org.opendaylight.controller.cluster.access.commands;
-import akka.actor.ActorRef;
import java.io.IOException;
import java.io.ObjectInput;
+import org.apache.pekko.actor.ActorRef;
import org.eclipse.jdt.annotation.NonNull;
import org.opendaylight.controller.cluster.access.ABIVersion;
import org.opendaylight.controller.cluster.access.concepts.TransactionIdentifier;
*/
package org.opendaylight.controller.cluster.access.commands;
-import akka.actor.ActorRef;
import com.google.common.base.MoreObjects.ToStringHelper;
import com.google.common.collect.ImmutableList;
import com.google.common.primitives.UnsignedLong;
import java.io.ObjectOutput;
import java.util.Collection;
import java.util.List;
+import org.apache.pekko.actor.ActorRef;
import org.eclipse.jdt.annotation.NonNull;
import org.opendaylight.controller.cluster.access.ABIVersion;
import org.opendaylight.controller.cluster.access.concepts.TransactionIdentifier;
*/
package org.opendaylight.controller.cluster.access.commands;
-import akka.actor.ActorRef;
import java.io.ObjectInput;
+import org.apache.pekko.actor.ActorRef;
import org.opendaylight.controller.cluster.access.ABIVersion;
import org.opendaylight.controller.cluster.access.concepts.TransactionIdentifier;
*/
package org.opendaylight.controller.cluster.access.commands;
-import akka.actor.ActorRef;
import java.io.ObjectInput;
+import org.apache.pekko.actor.ActorRef;
import org.opendaylight.controller.cluster.access.ABIVersion;
import org.opendaylight.controller.cluster.access.concepts.TransactionIdentifier;
*/
package org.opendaylight.controller.cluster.access.commands;
-import akka.actor.ActorRef;
import java.io.ObjectInput;
+import org.apache.pekko.actor.ActorRef;
import org.opendaylight.controller.cluster.access.ABIVersion;
import org.opendaylight.controller.cluster.access.concepts.TransactionIdentifier;
*/
package org.opendaylight.controller.cluster.access.commands;
-import akka.actor.ActorRef;
import java.io.ObjectInput;
+import org.apache.pekko.actor.ActorRef;
import org.opendaylight.controller.cluster.access.ABIVersion;
import org.opendaylight.controller.cluster.access.concepts.TransactionIdentifier;
*/
package org.opendaylight.controller.cluster.access.commands;
-import akka.actor.ActorRef;
import java.io.DataInput;
import java.io.IOException;
+import org.apache.pekko.actor.ActorRef;
import org.opendaylight.controller.cluster.access.ABIVersion;
import org.opendaylight.controller.cluster.access.concepts.Request;
import org.opendaylight.controller.cluster.access.concepts.RequestException;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
-import akka.serialization.JavaSerializer;
-import akka.serialization.Serialization;
import com.google.common.base.MoreObjects.ToStringHelper;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.serialization.JavaSerializer;
+import org.apache.pekko.serialization.Serialization;
import org.eclipse.jdt.annotation.NonNull;
import org.opendaylight.controller.cluster.access.ABIVersion;
import org.opendaylight.yangtools.concepts.WritableIdentifier;
*/
package org.opendaylight.controller.cluster.access.concepts;
-import akka.actor.ActorRef;
+import org.apache.pekko.actor.ActorRef;
public final class RequestEnvelope extends Envelope<Request<?, ?>> {
@java.io.Serial
import static org.junit.Assert.assertArrayEquals;
import static org.junit.Assert.assertEquals;
-import akka.actor.ActorRef;
-import akka.actor.ActorSelection;
-import akka.actor.ActorSystem;
-import akka.actor.ExtendedActorSystem;
-import akka.serialization.JavaSerializer;
-import akka.testkit.TestProbe;
import com.google.common.base.MoreObjects;
import com.google.common.collect.ImmutableList;
import java.util.List;
import java.util.Optional;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSelection;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.actor.ExtendedActorSystem;
+import org.apache.pekko.serialization.JavaSerializer;
+import org.apache.pekko.testkit.TestProbe;
import org.junit.Before;
import org.junit.Test;
import org.opendaylight.controller.cluster.access.ABIVersion;
ALTERNATES, TREE, MAX_MESSAGES);
public ConnectClientSuccessTest() {
- super(OBJECT, 146 + ACTOR_REF.path().toSerializationFormat().length());
+ super(OBJECT, 147 + ACTOR_REF.path().toSerializationFormat().length());
}
@Before
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;
-import akka.actor.ActorRef;
-import akka.actor.ActorSystem;
-import akka.actor.Props;
-import akka.testkit.TestActors;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.actor.Props;
+import org.apache.pekko.testkit.TestActors;
import org.junit.Before;
import org.junit.Test;
import org.opendaylight.controller.cluster.access.concepts.ClientIdentifier;
import static org.junit.Assert.assertNull;
import static org.junit.Assert.assertTrue;
-import akka.actor.ActorRef;
-import akka.actor.ActorSystem;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSystem;
import org.junit.Assert;
import org.opendaylight.controller.cluster.access.concepts.RequestException;
import org.opendaylight.controller.cluster.access.concepts.RequestExceptionTest;
public class NotLeaderExceptionTest extends RequestExceptionTest<NotLeaderException> {
private static final ActorSystem ACTOR_SYSTEM = ActorSystem.apply();
- private static final ActorRef ACTOR = new akka.testkit.TestProbe(ACTOR_SYSTEM).testActor();
+ private static final ActorRef ACTOR = new org.apache.pekko.testkit.TestProbe(ACTOR_SYSTEM).testActor();
private static final RequestException OBJECT = new NotLeaderException(ACTOR);
@Override
import static org.hamcrest.MatcherAssert.assertThat;
import static org.junit.Assert.assertEquals;
-import akka.actor.ActorRef;
-import akka.actor.ActorSystem;
-import akka.actor.ExtendedActorSystem;
-import akka.serialization.JavaSerializer;
-import akka.testkit.TestProbe;
import com.google.common.base.MoreObjects;
import org.apache.commons.lang3.SerializationUtils;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.actor.ExtendedActorSystem;
+import org.apache.pekko.serialization.JavaSerializer;
+import org.apache.pekko.testkit.TestProbe;
import org.junit.Before;
import org.junit.Test;
import static org.hamcrest.MatcherAssert.assertThat;
import static org.junit.Assert.assertEquals;
-import akka.actor.ActorRef;
-import akka.actor.ActorSystem;
-import akka.actor.ExtendedActorSystem;
-import akka.serialization.JavaSerializer;
-import akka.testkit.TestProbe;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.actor.ExtendedActorSystem;
+import org.apache.pekko.serialization.JavaSerializer;
+import org.apache.pekko.testkit.TestProbe;
import org.junit.After;
import org.junit.Before;
import org.opendaylight.controller.cluster.access.commands.TransactionPurgeRequest;
import static org.junit.Assert.assertNull;
import static org.junit.Assert.assertTrue;
-import akka.actor.ActorRef;
-import akka.actor.ActorSystem;
-import akka.testkit.TestProbe;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.testkit.TestProbe;
import org.opendaylight.controller.cluster.access.commands.CreateLocalHistoryRequest;
public class UnsupportedRequestExceptionTest extends RequestExceptionTest<UnsupportedRequestException> {
</dependency>
<dependency>
<groupId>org.opendaylight.controller</groupId>
- <artifactId>repackaged-akka</artifactId>
+ <artifactId>repackaged-pekko</artifactId>
</dependency>
<dependency>
<groupId>org.opendaylight.controller</groupId>
<scope>test</scope>
</dependency>
<dependency>
- <groupId>com.typesafe.akka</groupId>
- <artifactId>akka-testkit_2.13</artifactId>
+ <groupId>org.apache.pekko</groupId>
+ <artifactId>pekko-testkit_2.13</artifactId>
</dependency>
<dependency>
<groupId>com.google.guava</groupId>
*/
package org.opendaylight.controller.cluster.access.client;
-import akka.actor.ActorRef;
-import akka.actor.PoisonPill;
-import akka.persistence.AbstractPersistentActor;
import com.google.common.annotations.VisibleForTesting;
import java.nio.file.Path;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.PoisonPill;
+import org.apache.pekko.persistence.AbstractPersistentActor;
import org.eclipse.jdt.annotation.NonNull;
import org.opendaylight.controller.cluster.access.concepts.FrontendIdentifier;
import org.slf4j.Logger;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
+import org.apache.pekko.actor.ActorRef;
import org.eclipse.jdt.annotation.NonNull;
import org.eclipse.jdt.annotation.Nullable;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
+import org.apache.pekko.actor.ActorRef;
import org.eclipse.jdt.annotation.NonNull;
import org.opendaylight.yangtools.concepts.Mutable;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.base.MoreObjects;
import com.google.common.base.MoreObjects.ToStringHelper;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Consumer;
+import org.apache.pekko.actor.ActorRef;
import org.checkerframework.checker.lock.qual.GuardedBy;
import org.checkerframework.checker.lock.qual.Holding;
import org.eclipse.jdt.annotation.NonNull;
import static com.google.common.base.Preconditions.checkArgument;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
import com.google.common.base.MoreObjects;
import com.google.common.base.MoreObjects.ToStringHelper;
+import org.apache.pekko.actor.ActorRef;
import org.opendaylight.controller.cluster.access.ABIVersion;
/**
*/
package org.opendaylight.controller.cluster.access.client;
-import akka.actor.ActorRef;
import java.util.concurrent.CompletionStage;
import java.util.function.Consumer;
+import org.apache.pekko.actor.ActorRef;
import org.eclipse.jdt.annotation.NonNull;
import org.opendaylight.yangtools.concepts.Registration;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
-import akka.actor.ActorSystem;
-import akka.actor.Cancellable;
-import akka.actor.Scheduler;
import com.google.common.base.Ticker;
import java.util.concurrent.TimeUnit;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.actor.Cancellable;
+import org.apache.pekko.actor.Scheduler;
import org.eclipse.jdt.annotation.NonNull;
import org.opendaylight.controller.cluster.access.concepts.ClientIdentifier;
import org.opendaylight.controller.cluster.common.actor.Dispatchers;
import static java.util.Objects.requireNonNull;
-import akka.persistence.SnapshotSelectionCriteria;
+import org.apache.pekko.persistence.SnapshotSelectionCriteria;
import org.opendaylight.controller.cluster.access.concepts.ClientIdentifier;
/**
*/
package org.opendaylight.controller.cluster.access.client;
-import akka.dispatch.ControlMessage;
+import org.apache.pekko.dispatch.ControlMessage;
import org.eclipse.jdt.annotation.NonNull;
import org.eclipse.jdt.annotation.Nullable;
import static java.util.Objects.requireNonNull;
-import akka.persistence.RecoveryCompleted;
-import akka.persistence.SnapshotOffer;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.nio.file.StandardOpenOption;
import java.util.Properties;
+import org.apache.pekko.persistence.RecoveryCompleted;
+import org.apache.pekko.persistence.SnapshotOffer;
import org.eclipse.jdt.annotation.Nullable;
import org.opendaylight.controller.cluster.access.concepts.ClientIdentifier;
import org.opendaylight.controller.cluster.access.concepts.FrontendIdentifier;
import static java.util.Objects.requireNonNull;
-import akka.persistence.DeleteSnapshotsFailure;
-import akka.persistence.DeleteSnapshotsSuccess;
-import akka.persistence.SaveSnapshotFailure;
-import akka.persistence.SaveSnapshotSuccess;
-import akka.persistence.SnapshotSelectionCriteria;
+import org.apache.pekko.persistence.DeleteSnapshotsFailure;
+import org.apache.pekko.persistence.DeleteSnapshotsSuccess;
+import org.apache.pekko.persistence.SaveSnapshotFailure;
+import org.apache.pekko.persistence.SaveSnapshotSuccess;
+import org.apache.pekko.persistence.SnapshotSelectionCriteria;
import org.opendaylight.controller.cluster.access.concepts.ClientIdentifier;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import static com.google.common.base.Verify.verify;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
import com.google.common.annotations.VisibleForTesting;
import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
import java.util.ArrayDeque;
import java.util.List;
import java.util.Optional;
import java.util.Queue;
+import org.apache.pekko.actor.ActorRef;
import org.opendaylight.controller.cluster.access.concepts.Request;
import org.opendaylight.controller.cluster.access.concepts.RequestEnvelope;
import org.opendaylight.controller.cluster.access.concepts.Response;
import static org.mockito.Mockito.doReturn;
-import akka.actor.ActorRef;
import com.google.common.testing.FakeTicker;
+import org.apache.pekko.actor.ActorRef;
import org.junit.Before;
import org.mockito.Mock;
import org.mockito.MockitoAnnotations;
import static org.mockito.Mockito.verify;
import static org.opendaylight.controller.cluster.access.client.ConnectionEntryMatcher.entryWithRequest;
-import akka.actor.ActorRef;
-import akka.actor.ActorSystem;
-import akka.testkit.TestProbe;
-import akka.testkit.javadsl.TestKit;
import com.google.common.collect.Iterables;
import java.util.OptionalLong;
import java.util.function.Consumer;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.testkit.TestProbe;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.mockito.Mockito.mock;
import static org.opendaylight.controller.cluster.access.client.ConnectionEntryMatcher.entryWithRequest;
-import akka.actor.ActorSystem;
-import akka.testkit.TestProbe;
-import akka.testkit.javadsl.TestKit;
import com.google.common.base.Ticker;
import java.util.Collection;
import java.util.Optional;
import java.util.function.Consumer;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.testkit.TestProbe;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.spy;
-import akka.actor.ActorRef;
-import akka.actor.ActorSystem;
import java.util.function.Consumer;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSystem;
import org.opendaylight.controller.cluster.access.concepts.ClientIdentifier;
import org.opendaylight.controller.cluster.access.concepts.Request;
import org.opendaylight.controller.cluster.access.concepts.Response;
import static org.mockito.Mockito.timeout;
import static org.mockito.Mockito.verify;
-import akka.actor.ActorRef;
-import akka.actor.ActorSystem;
-import akka.actor.Props;
-import akka.persistence.Persistence;
-import akka.persistence.SelectedSnapshot;
-import akka.persistence.SnapshotMetadata;
-import akka.testkit.TestProbe;
-import akka.testkit.javadsl.TestKit;
import com.typesafe.config.ConfigFactory;
import java.io.File;
import java.io.IOException;
import java.util.List;
import java.util.Optional;
import java.util.concurrent.TimeUnit;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.actor.Props;
+import org.apache.pekko.persistence.Persistence;
+import org.apache.pekko.persistence.SelectedSnapshot;
+import org.apache.pekko.persistence.SnapshotMetadata;
+import org.apache.pekko.testkit.TestProbe;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import static org.junit.Assert.assertSame;
-import akka.actor.ActorSystem;
-import akka.testkit.TestProbe;
-import akka.testkit.javadsl.TestKit;
import com.google.common.base.Ticker;
import java.util.concurrent.TimeUnit;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.testkit.TestProbe;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.verifyNoMoreInteractions;
-import akka.actor.ActorRef;
-import akka.actor.ActorSystem;
-import akka.testkit.TestProbe;
import com.google.common.testing.FakeTicker;
import java.util.OptionalLong;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.testkit.TestProbe;
import org.junit.After;
import org.junit.AfterClass;
import org.junit.Before;
import static org.mockito.Mockito.doNothing;
import static org.mockito.Mockito.verify;
-import akka.actor.ActorRef;
-import akka.actor.ActorSystem;
-import akka.testkit.TestProbe;
import com.google.common.testing.FakeTicker;
import java.util.concurrent.ThreadLocalRandom;
import java.util.function.Consumer;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.testkit.TestProbe;
import org.junit.After;
import org.junit.AfterClass;
import org.junit.Before;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
-import akka.dispatch.OnComplete;
-import akka.pattern.Patterns;
-import akka.persistence.SelectedSnapshot;
-import akka.persistence.SnapshotMetadata;
-import akka.persistence.SnapshotSelectionCriteria;
-import akka.persistence.snapshot.japi.SnapshotStore;
import java.util.Optional;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.dispatch.OnComplete;
+import org.apache.pekko.pattern.Patterns;
+import org.apache.pekko.persistence.SelectedSnapshot;
+import org.apache.pekko.persistence.SnapshotMetadata;
+import org.apache.pekko.persistence.SnapshotSelectionCriteria;
+import org.apache.pekko.persistence.snapshot.japi.SnapshotStore;
import scala.concurrent.Future;
import scala.concurrent.Promise;
-akka {
+pekko {
persistence.snapshot-store.plugin = "in-memory-snapshot-store"
persistence.journal.plugin = "in-memory-journal"
- loggers = ["akka.testkit.TestEventListener", "akka.event.slf4j.Slf4jLogger"]
+ loggers = ["org.apache.pekko.testkit.TestEventListener", "org.apache.pekko.event.slf4j.Slf4jLogger"]
}
in-memory-journal {
- class = "akka.persistence.journal.inmem.InmemJournal"
+ class = "org.apache.pekko.persistence.journal.inmem.InmemJournal"
}
in-memory-snapshot-store {
# Class name of the plugin.
class = "org.opendaylight.controller.cluster.access.client.MockedSnapshotStore"
# Dispatcher for the plugin actor.
- plugin-dispatcher = "akka.persistence.dispatchers.default-plugin-dispatcher"
+ plugin-dispatcher = "pekko.persistence.dispatchers.default-plugin-dispatcher"
}
\ No newline at end of file
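The hunks above (and throughout this patch) follow a small set of mechanical rewrite rules: Java packages `akka.*` become `org.apache.pekko.*`, Maven coordinates `com.typesafe.akka`/`akka-*` become `org.apache.pekko`/`pekko-*`, and the HOCON config root `akka { }` becomes `pekko { }`. A minimal sketch of those rules as a line-by-line pass, assuming simple substring rewriting (the class name `PekkoRename` is illustrative only, not part of the patch):

```java
import java.util.List;
import java.util.Map;

public final class PekkoRename {
    // Ordered rewrite rules for the mechanical part of the migration:
    // Java imports, Maven coordinates and the HOCON config root.
    // Caveat: HOCON keys such as "akka.persistence..." become
    // "pekko.persistence..." (not "org.apache.pekko...."), as seen in the
    // plugin-dispatcher setting above, so those need separate handling.
    private static final List<Map.Entry<String, String>> RULES = List.of(
        Map.entry("com.typesafe.akka", "org.apache.pekko"), // Maven groupId
        Map.entry("akka.", "org.apache.pekko."),            // Java packages
        Map.entry("akka-", "pekko-"),                       // Maven artifactId prefix
        Map.entry("akka {", "pekko {"));                    // HOCON root section

    private PekkoRename() {
        // utility class
    }

    // Applies each rule in order to a single source, POM or config line.
    public static String migrate(final String line) {
        String result = line;
        for (final Map.Entry<String, String> rule : RULES) {
            result = result.replace(rule.getKey(), rule.getValue());
        }
        return result;
    }
}
```

Note that Java package names like `org.opendaylight.controller.eos.akka` are deliberately left untouched by the patch, so a blind rename of the bare string `akka` would be wrong; the rules above only match the dotted, hyphenated, or braced forms.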
</dependency>
<dependency>
<groupId>org.opendaylight.controller</groupId>
- <artifactId>repackaged-akka</artifactId>
+ <artifactId>repackaged-pekko</artifactId>
</dependency>
<dependency>
<groupId>org.opendaylight.controller</groupId>
</dependency>
<dependency>
- <groupId>com.typesafe.akka</groupId>
- <artifactId>akka-testkit_2.13</artifactId>
+ <groupId>org.apache.pekko</groupId>
+ <artifactId>pekko-testkit_2.13</artifactId>
</dependency>
<dependency>
- <groupId>com.typesafe.akka</groupId>
- <artifactId>akka-actor-testkit-typed_2.13</artifactId>
+ <groupId>org.apache.pekko</groupId>
+ <artifactId>pekko-actor-testkit-typed_2.13</artifactId>
</dependency>
<dependency>
<groupId>org.awaitility</groupId>
*/
package org.opendaylight.controller.eos.akka;
-import akka.actor.ActorSystem;
-import akka.actor.typed.ActorRef;
-import akka.actor.typed.Scheduler;
-import akka.actor.typed.javadsl.Adapter;
-import akka.actor.typed.javadsl.AskPattern;
-import akka.actor.typed.javadsl.Behaviors;
-import akka.cluster.typed.Cluster;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.util.concurrent.ListenableFuture;
import com.google.common.util.concurrent.SettableFuture;
import javax.annotation.PreDestroy;
import javax.inject.Inject;
import javax.inject.Singleton;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.actor.typed.ActorRef;
+import org.apache.pekko.actor.typed.Scheduler;
+import org.apache.pekko.actor.typed.javadsl.Adapter;
+import org.apache.pekko.actor.typed.javadsl.AskPattern;
+import org.apache.pekko.actor.typed.javadsl.Behaviors;
+import org.apache.pekko.cluster.typed.Cluster;
import org.opendaylight.controller.cluster.ActorSystemProvider;
import org.opendaylight.controller.eos.akka.bootstrap.EOSMain;
import org.opendaylight.controller.eos.akka.bootstrap.command.BootstrapCommand;
import org.slf4j.LoggerFactory;
/**
- * DOMEntityOwnershipService implementation backed by native Akka clustering constructs. We use distributed-data
+ * DOMEntityOwnershipService implementation backed by native Pekko clustering constructs. We use distributed-data
* to track all registered candidates and cluster-singleton to maintain a single cluster-wide authority which selects
* the appropriate owners.
*/
*/
package org.opendaylight.controller.eos.akka.bootstrap;
-import akka.actor.typed.ActorRef;
-import akka.actor.typed.Behavior;
-import akka.actor.typed.SupervisorStrategy;
-import akka.actor.typed.javadsl.AbstractBehavior;
-import akka.actor.typed.javadsl.ActorContext;
-import akka.actor.typed.javadsl.Behaviors;
-import akka.actor.typed.javadsl.Receive;
-import akka.cluster.typed.Cluster;
-import akka.cluster.typed.ClusterSingleton;
-import akka.cluster.typed.SingletonActor;
+import org.apache.pekko.actor.typed.ActorRef;
+import org.apache.pekko.actor.typed.Behavior;
+import org.apache.pekko.actor.typed.SupervisorStrategy;
+import org.apache.pekko.actor.typed.javadsl.AbstractBehavior;
+import org.apache.pekko.actor.typed.javadsl.ActorContext;
+import org.apache.pekko.actor.typed.javadsl.Behaviors;
+import org.apache.pekko.actor.typed.javadsl.Receive;
+import org.apache.pekko.cluster.typed.Cluster;
+import org.apache.pekko.cluster.typed.ClusterSingleton;
+import org.apache.pekko.cluster.typed.SingletonActor;
import org.opendaylight.controller.eos.akka.bootstrap.command.BootstrapCommand;
import org.opendaylight.controller.eos.akka.bootstrap.command.GetRunningContext;
import org.opendaylight.controller.eos.akka.bootstrap.command.RunningContext;
import static java.util.Objects.requireNonNull;
-import akka.actor.typed.ActorRef;
+import org.apache.pekko.actor.typed.ActorRef;
public final class GetRunningContext extends BootstrapCommand {
private final ActorRef<RunningContext> replyTo;
import static java.util.Objects.requireNonNull;
-import akka.actor.typed.ActorRef;
+import org.apache.pekko.actor.typed.ActorRef;
import org.eclipse.jdt.annotation.NonNull;
import org.opendaylight.controller.eos.akka.owner.checker.command.StateCheckerCommand;
import org.opendaylight.controller.eos.akka.owner.supervisor.command.OwnerSupervisorCommand;
import static java.util.Objects.requireNonNull;
-import akka.actor.typed.ActorRef;
+import org.apache.pekko.actor.typed.ActorRef;
import org.eclipse.jdt.annotation.NonNull;
import org.opendaylight.yangtools.yang.common.Empty;
import static com.google.common.base.Verify.verifyNotNull;
-import akka.actor.typed.ActorRef;
-import akka.actor.typed.Behavior;
-import akka.actor.typed.javadsl.AbstractBehavior;
-import akka.actor.typed.javadsl.ActorContext;
-import akka.actor.typed.javadsl.AskPattern;
-import akka.actor.typed.javadsl.Behaviors;
-import akka.actor.typed.javadsl.Receive;
-import akka.cluster.ddata.LWWRegister;
-import akka.cluster.ddata.LWWRegisterKey;
-import akka.cluster.ddata.ORMap;
-import akka.cluster.ddata.ORSet;
-import akka.cluster.ddata.typed.javadsl.DistributedData;
-import akka.cluster.ddata.typed.javadsl.Replicator;
-import akka.cluster.ddata.typed.javadsl.ReplicatorMessageAdapter;
import java.time.Duration;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.CompletionStage;
+import org.apache.pekko.actor.typed.ActorRef;
+import org.apache.pekko.actor.typed.Behavior;
+import org.apache.pekko.actor.typed.javadsl.AbstractBehavior;
+import org.apache.pekko.actor.typed.javadsl.ActorContext;
+import org.apache.pekko.actor.typed.javadsl.AskPattern;
+import org.apache.pekko.actor.typed.javadsl.Behaviors;
+import org.apache.pekko.actor.typed.javadsl.Receive;
+import org.apache.pekko.cluster.ddata.LWWRegister;
+import org.apache.pekko.cluster.ddata.LWWRegisterKey;
+import org.apache.pekko.cluster.ddata.ORMap;
+import org.apache.pekko.cluster.ddata.ORSet;
+import org.apache.pekko.cluster.ddata.typed.javadsl.DistributedData;
+import org.apache.pekko.cluster.ddata.typed.javadsl.Replicator;
+import org.apache.pekko.cluster.ddata.typed.javadsl.ReplicatorMessageAdapter;
import org.opendaylight.controller.eos.akka.owner.checker.command.AbstractEntityRequest;
import org.opendaylight.controller.eos.akka.owner.checker.command.GetCandidates;
import org.opendaylight.controller.eos.akka.owner.checker.command.GetCandidatesForEntity;
import static java.util.Objects.requireNonNull;
-import akka.actor.typed.ActorRef;
-import akka.actor.typed.Behavior;
-import akka.actor.typed.javadsl.AbstractBehavior;
-import akka.actor.typed.javadsl.ActorContext;
-import akka.actor.typed.javadsl.Behaviors;
-import akka.actor.typed.javadsl.Receive;
-import akka.cluster.ddata.LWWRegister;
-import akka.cluster.ddata.LWWRegisterKey;
-import akka.cluster.ddata.typed.javadsl.DistributedData;
-import akka.cluster.ddata.typed.javadsl.Replicator;
-import akka.cluster.ddata.typed.javadsl.Replicator.Get;
-import akka.cluster.ddata.typed.javadsl.Replicator.GetFailure;
-import akka.cluster.ddata.typed.javadsl.Replicator.GetResponse;
-import akka.cluster.ddata.typed.javadsl.Replicator.GetSuccess;
-import akka.cluster.ddata.typed.javadsl.Replicator.NotFound;
-import akka.cluster.ddata.typed.javadsl.Replicator.ReadMajority;
-import akka.cluster.ddata.typed.javadsl.ReplicatorMessageAdapter;
import java.time.Duration;
+import org.apache.pekko.actor.typed.ActorRef;
+import org.apache.pekko.actor.typed.Behavior;
+import org.apache.pekko.actor.typed.javadsl.AbstractBehavior;
+import org.apache.pekko.actor.typed.javadsl.ActorContext;
+import org.apache.pekko.actor.typed.javadsl.Behaviors;
+import org.apache.pekko.actor.typed.javadsl.Receive;
+import org.apache.pekko.cluster.ddata.LWWRegister;
+import org.apache.pekko.cluster.ddata.LWWRegisterKey;
+import org.apache.pekko.cluster.ddata.typed.javadsl.DistributedData;
+import org.apache.pekko.cluster.ddata.typed.javadsl.Replicator;
+import org.apache.pekko.cluster.ddata.typed.javadsl.Replicator.Get;
+import org.apache.pekko.cluster.ddata.typed.javadsl.Replicator.GetFailure;
+import org.apache.pekko.cluster.ddata.typed.javadsl.Replicator.GetResponse;
+import org.apache.pekko.cluster.ddata.typed.javadsl.Replicator.GetSuccess;
+import org.apache.pekko.cluster.ddata.typed.javadsl.Replicator.NotFound;
+import org.apache.pekko.cluster.ddata.typed.javadsl.Replicator.ReadMajority;
+import org.apache.pekko.cluster.ddata.typed.javadsl.ReplicatorMessageAdapter;
import org.opendaylight.controller.eos.akka.owner.checker.command.GetEntitiesRequest;
import org.opendaylight.controller.eos.akka.owner.checker.command.GetEntityOwnerRequest;
import org.opendaylight.controller.eos.akka.owner.checker.command.GetEntityRequest;
*/
package org.opendaylight.controller.eos.akka.owner.checker.command;
-import akka.actor.typed.ActorRef;
import com.google.common.base.MoreObjects;
+import org.apache.pekko.actor.typed.ActorRef;
import org.eclipse.jdt.annotation.NonNull;
import org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.controller.entity.owners.norev.EntityId;
import org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.controller.entity.owners.norev.EntityName;
import static java.util.Objects.requireNonNull;
-import akka.actor.typed.ActorRef;
-import akka.cluster.ddata.ORMap;
-import akka.cluster.ddata.ORSet;
-import akka.cluster.ddata.typed.javadsl.Replicator.GetResponse;
+import org.apache.pekko.actor.typed.ActorRef;
+import org.apache.pekko.cluster.ddata.ORMap;
+import org.apache.pekko.cluster.ddata.ORSet;
+import org.apache.pekko.cluster.ddata.typed.javadsl.Replicator.GetResponse;
import org.eclipse.jdt.annotation.NonNull;
import org.eclipse.jdt.annotation.Nullable;
import org.opendaylight.mdsal.eos.dom.api.DOMEntity;
import static java.util.Objects.requireNonNull;
-import akka.actor.typed.ActorRef;
-import akka.cluster.ddata.ORMap;
-import akka.cluster.ddata.ORSet;
-import akka.cluster.ddata.typed.javadsl.Replicator.GetResponse;
+import org.apache.pekko.actor.typed.ActorRef;
+import org.apache.pekko.cluster.ddata.ORMap;
+import org.apache.pekko.cluster.ddata.ORSet;
+import org.apache.pekko.cluster.ddata.typed.javadsl.Replicator.GetResponse;
import org.eclipse.jdt.annotation.NonNull;
import org.eclipse.jdt.annotation.Nullable;
import org.opendaylight.mdsal.eos.dom.api.DOMEntity;
*/
package org.opendaylight.controller.eos.akka.owner.checker.command;
-import akka.actor.typed.ActorRef;
+import org.apache.pekko.actor.typed.ActorRef;
public final class GetEntitiesRequest extends StateCheckerRequest<GetEntitiesReply> {
private static final long serialVersionUID = 1L;
*/
package org.opendaylight.controller.eos.akka.owner.checker.command;
-import akka.actor.typed.ActorRef;
+import org.apache.pekko.actor.typed.ActorRef;
import org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.controller.entity.owners.norev.EntityId;
public final class GetEntityOwnerRequest extends AbstractEntityRequest<GetEntityOwnerReply> {
*/
package org.opendaylight.controller.eos.akka.owner.checker.command;
-import akka.actor.typed.ActorRef;
+import org.apache.pekko.actor.typed.ActorRef;
import org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.controller.entity.owners.norev.EntityId;
public final class GetEntityRequest extends AbstractEntityRequest<GetEntityReply> {
*/
package org.opendaylight.controller.eos.akka.owner.checker.command;
-import akka.actor.typed.ActorRef;
-import akka.cluster.ddata.LWWRegister;
-import akka.cluster.ddata.typed.javadsl.Replicator.GetResponse;
+import org.apache.pekko.actor.typed.ActorRef;
+import org.apache.pekko.cluster.ddata.LWWRegister;
+import org.apache.pekko.cluster.ddata.typed.javadsl.Replicator.GetResponse;
import org.eclipse.jdt.annotation.NonNull;
import org.opendaylight.mdsal.eos.dom.api.DOMEntity;
import static java.util.Objects.requireNonNull;
-import akka.actor.typed.ActorRef;
+import org.apache.pekko.actor.typed.ActorRef;
import org.eclipse.jdt.annotation.NonNull;
import org.opendaylight.mdsal.eos.dom.api.DOMEntity;
import static java.util.Objects.requireNonNull;
-import akka.actor.typed.ActorRef;
-import akka.cluster.ddata.LWWRegister;
-import akka.cluster.ddata.typed.javadsl.Replicator.GetResponse;
+import org.apache.pekko.actor.typed.ActorRef;
+import org.apache.pekko.cluster.ddata.LWWRegister;
+import org.apache.pekko.cluster.ddata.typed.javadsl.Replicator.GetResponse;
import org.eclipse.jdt.annotation.NonNull;
import org.opendaylight.mdsal.eos.dom.api.DOMEntity;
import static java.util.Objects.requireNonNull;
-import akka.actor.typed.ActorRef;
-import akka.cluster.ddata.LWWRegister;
-import akka.cluster.ddata.typed.javadsl.Replicator.GetResponse;
+import org.apache.pekko.actor.typed.ActorRef;
+import org.apache.pekko.cluster.ddata.LWWRegister;
+import org.apache.pekko.cluster.ddata.typed.javadsl.Replicator.GetResponse;
import org.eclipse.jdt.annotation.NonNull;
public class OwnerDataResponse extends StateCheckerCommand {
import static java.util.Objects.requireNonNull;
-import akka.actor.typed.ActorRef;
-import akka.cluster.ddata.LWWRegister;
-import akka.cluster.ddata.typed.javadsl.Replicator.GetResponse;
+import org.apache.pekko.actor.typed.ActorRef;
+import org.apache.pekko.cluster.ddata.LWWRegister;
+import org.apache.pekko.cluster.ddata.typed.javadsl.Replicator.GetResponse;
import org.eclipse.jdt.annotation.NonNull;
import org.opendaylight.mdsal.eos.dom.api.DOMEntity;
import static java.util.Objects.requireNonNull;
-import akka.actor.typed.ActorRef;
import java.io.Serializable;
+import org.apache.pekko.actor.typed.ActorRef;
import org.eclipse.jdt.annotation.NonNull;
public abstract class StateCheckerRequest<T extends StateCheckerReply> extends StateCheckerCommand
*/
package org.opendaylight.controller.eos.akka.owner.supervisor;
-import akka.actor.typed.ActorRef;
-import akka.actor.typed.Behavior;
-import akka.actor.typed.javadsl.AbstractBehavior;
-import akka.actor.typed.javadsl.ActorContext;
-import akka.cluster.ddata.ORMap;
-import akka.cluster.ddata.ORSet;
-import akka.cluster.ddata.typed.javadsl.DistributedData;
-import akka.cluster.ddata.typed.javadsl.Replicator;
-import akka.cluster.ddata.typed.javadsl.ReplicatorMessageAdapter;
import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
import java.time.Duration;
+import org.apache.pekko.actor.typed.ActorRef;
+import org.apache.pekko.actor.typed.Behavior;
+import org.apache.pekko.actor.typed.javadsl.AbstractBehavior;
+import org.apache.pekko.actor.typed.javadsl.ActorContext;
+import org.apache.pekko.cluster.ddata.ORMap;
+import org.apache.pekko.cluster.ddata.ORSet;
+import org.apache.pekko.cluster.ddata.typed.javadsl.DistributedData;
+import org.apache.pekko.cluster.ddata.typed.javadsl.Replicator;
+import org.apache.pekko.cluster.ddata.typed.javadsl.ReplicatorMessageAdapter;
import org.opendaylight.controller.eos.akka.owner.supervisor.command.ClearCandidates;
import org.opendaylight.controller.eos.akka.owner.supervisor.command.ClearCandidatesForMember;
import org.opendaylight.controller.eos.akka.owner.supervisor.command.ClearCandidatesResponse;
*/
package org.opendaylight.controller.eos.akka.owner.supervisor;
-import akka.actor.typed.ActorRef;
-import akka.actor.typed.Behavior;
-import akka.actor.typed.javadsl.AbstractBehavior;
-import akka.actor.typed.javadsl.ActorContext;
-import akka.actor.typed.javadsl.Behaviors;
-import akka.actor.typed.javadsl.Receive;
-import akka.cluster.ddata.ORMap;
-import akka.cluster.ddata.ORSet;
-import akka.cluster.ddata.SelfUniqueAddress;
-import akka.cluster.ddata.typed.javadsl.DistributedData;
-import akka.cluster.ddata.typed.javadsl.Replicator;
-import akka.cluster.ddata.typed.javadsl.ReplicatorMessageAdapter;
import java.time.Duration;
import java.util.Map;
+import org.apache.pekko.actor.typed.ActorRef;
+import org.apache.pekko.actor.typed.Behavior;
+import org.apache.pekko.actor.typed.javadsl.AbstractBehavior;
+import org.apache.pekko.actor.typed.javadsl.ActorContext;
+import org.apache.pekko.actor.typed.javadsl.Behaviors;
+import org.apache.pekko.actor.typed.javadsl.Receive;
+import org.apache.pekko.cluster.ddata.ORMap;
+import org.apache.pekko.cluster.ddata.ORSet;
+import org.apache.pekko.cluster.ddata.SelfUniqueAddress;
+import org.apache.pekko.cluster.ddata.typed.javadsl.DistributedData;
+import org.apache.pekko.cluster.ddata.typed.javadsl.Replicator;
+import org.apache.pekko.cluster.ddata.typed.javadsl.ReplicatorMessageAdapter;
import org.opendaylight.controller.eos.akka.owner.supervisor.command.ClearCandidates;
import org.opendaylight.controller.eos.akka.owner.supervisor.command.ClearCandidatesResponse;
import org.opendaylight.controller.eos.akka.owner.supervisor.command.ClearCandidatesUpdateResponse;
import static java.util.Objects.requireNonNull;
-import akka.actor.typed.Behavior;
-import akka.actor.typed.javadsl.ActorContext;
-import akka.actor.typed.javadsl.Behaviors;
-import akka.actor.typed.javadsl.Receive;
-import akka.cluster.Member;
-import akka.cluster.typed.Cluster;
-import akka.pattern.StatusReply;
+import org.apache.pekko.actor.typed.Behavior;
+import org.apache.pekko.actor.typed.javadsl.ActorContext;
+import org.apache.pekko.actor.typed.javadsl.Behaviors;
+import org.apache.pekko.actor.typed.javadsl.Receive;
+import org.apache.pekko.cluster.Member;
+import org.apache.pekko.cluster.typed.Cluster;
+import org.apache.pekko.pattern.StatusReply;
import org.opendaylight.controller.eos.akka.owner.supervisor.command.ActivateDataCenter;
import org.opendaylight.controller.eos.akka.owner.supervisor.command.ClearCandidates;
import org.opendaylight.controller.eos.akka.owner.supervisor.command.ClearCandidatesForMember;
import static com.google.common.base.Verify.verifyNotNull;
import static java.util.Objects.requireNonNull;
-import akka.actor.typed.ActorRef;
-import akka.actor.typed.Behavior;
-import akka.actor.typed.javadsl.ActorContext;
-import akka.actor.typed.javadsl.Behaviors;
-import akka.actor.typed.javadsl.Receive;
-import akka.cluster.ClusterEvent;
-import akka.cluster.ClusterEvent.CurrentClusterState;
-import akka.cluster.Member;
-import akka.cluster.ddata.LWWRegister;
-import akka.cluster.ddata.LWWRegisterKey;
-import akka.cluster.ddata.ORMap;
-import akka.cluster.ddata.ORSet;
-import akka.cluster.ddata.SelfUniqueAddress;
-import akka.cluster.ddata.typed.javadsl.DistributedData;
-import akka.cluster.ddata.typed.javadsl.Replicator;
-import akka.cluster.ddata.typed.javadsl.ReplicatorMessageAdapter;
-import akka.cluster.typed.Cluster;
-import akka.cluster.typed.Subscribe;
-import akka.pattern.StatusReply;
import com.google.common.collect.HashMultimap;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableSet;
import java.util.function.BiPredicate;
import java.util.stream.Collectors;
import java.util.stream.StreamSupport;
+import org.apache.pekko.actor.typed.ActorRef;
+import org.apache.pekko.actor.typed.Behavior;
+import org.apache.pekko.actor.typed.javadsl.ActorContext;
+import org.apache.pekko.actor.typed.javadsl.Behaviors;
+import org.apache.pekko.actor.typed.javadsl.Receive;
+import org.apache.pekko.cluster.ClusterEvent;
+import org.apache.pekko.cluster.ClusterEvent.CurrentClusterState;
+import org.apache.pekko.cluster.Member;
+import org.apache.pekko.cluster.ddata.LWWRegister;
+import org.apache.pekko.cluster.ddata.LWWRegisterKey;
+import org.apache.pekko.cluster.ddata.ORMap;
+import org.apache.pekko.cluster.ddata.ORSet;
+import org.apache.pekko.cluster.ddata.SelfUniqueAddress;
+import org.apache.pekko.cluster.ddata.typed.javadsl.DistributedData;
+import org.apache.pekko.cluster.ddata.typed.javadsl.Replicator;
+import org.apache.pekko.cluster.ddata.typed.javadsl.ReplicatorMessageAdapter;
+import org.apache.pekko.cluster.typed.Cluster;
+import org.apache.pekko.cluster.typed.Subscribe;
+import org.apache.pekko.pattern.StatusReply;
import org.opendaylight.controller.eos.akka.owner.supervisor.command.AbstractEntityRequest;
import org.opendaylight.controller.eos.akka.owner.supervisor.command.CandidatesChanged;
import org.opendaylight.controller.eos.akka.owner.supervisor.command.ClearCandidates;
import static java.util.Objects.requireNonNull;
-import akka.actor.typed.ActorRef;
-import akka.actor.typed.Behavior;
-import akka.actor.typed.javadsl.ActorContext;
-import akka.actor.typed.javadsl.Behaviors;
-import akka.actor.typed.javadsl.Receive;
-import akka.cluster.ddata.LWWRegister;
-import akka.cluster.ddata.LWWRegisterKey;
-import akka.cluster.ddata.ORMap;
-import akka.cluster.ddata.ORSet;
-import akka.cluster.ddata.typed.javadsl.DistributedData;
-import akka.cluster.ddata.typed.javadsl.Replicator;
-import akka.cluster.ddata.typed.javadsl.ReplicatorMessageAdapter;
-import akka.pattern.StatusReply;
import java.time.Duration;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
+import org.apache.pekko.actor.typed.ActorRef;
+import org.apache.pekko.actor.typed.Behavior;
+import org.apache.pekko.actor.typed.javadsl.ActorContext;
+import org.apache.pekko.actor.typed.javadsl.Behaviors;
+import org.apache.pekko.actor.typed.javadsl.Receive;
+import org.apache.pekko.cluster.ddata.LWWRegister;
+import org.apache.pekko.cluster.ddata.LWWRegisterKey;
+import org.apache.pekko.cluster.ddata.ORMap;
+import org.apache.pekko.cluster.ddata.ORSet;
+import org.apache.pekko.cluster.ddata.typed.javadsl.DistributedData;
+import org.apache.pekko.cluster.ddata.typed.javadsl.Replicator;
+import org.apache.pekko.cluster.ddata.typed.javadsl.ReplicatorMessageAdapter;
+import org.apache.pekko.pattern.StatusReply;
import org.eclipse.jdt.annotation.Nullable;
import org.opendaylight.controller.eos.akka.owner.supervisor.command.ClearCandidates;
import org.opendaylight.controller.eos.akka.owner.supervisor.command.ClearCandidatesForMember;
*/
package org.opendaylight.controller.eos.akka.owner.supervisor.command;
-import akka.actor.typed.ActorRef;
-import akka.pattern.StatusReply;
+import org.apache.pekko.actor.typed.ActorRef;
+import org.apache.pekko.pattern.StatusReply;
import org.eclipse.jdt.annotation.NonNull;
import org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.controller.entity.owners.norev.EntityId;
import org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.controller.entity.owners.norev.EntityName;
*/
package org.opendaylight.controller.eos.akka.owner.supervisor.command;
-import akka.actor.typed.ActorRef;
import java.io.Serializable;
+import org.apache.pekko.actor.typed.ActorRef;
import org.eclipse.jdt.annotation.Nullable;
public final class ActivateDataCenter extends OwnerSupervisorCommand implements Serializable {
import static java.util.Objects.requireNonNull;
-import akka.cluster.ddata.ORMap;
-import akka.cluster.ddata.ORSet;
-import akka.cluster.ddata.typed.javadsl.Replicator.SubscribeResponse;
import com.google.common.base.MoreObjects;
+import org.apache.pekko.cluster.ddata.ORMap;
+import org.apache.pekko.cluster.ddata.ORSet;
+import org.apache.pekko.cluster.ddata.typed.javadsl.Replicator.SubscribeResponse;
import org.eclipse.jdt.annotation.NonNull;
import org.opendaylight.mdsal.eos.dom.api.DOMEntity;
*/
package org.opendaylight.controller.eos.akka.owner.supervisor.command;
-import akka.cluster.ddata.ORMap;
-import akka.cluster.ddata.ORSet;
-import akka.cluster.ddata.typed.javadsl.Replicator;
+import org.apache.pekko.cluster.ddata.ORMap;
+import org.apache.pekko.cluster.ddata.ORSet;
+import org.apache.pekko.cluster.ddata.typed.javadsl.Replicator;
import org.opendaylight.mdsal.eos.dom.api.DOMEntity;
public class ClearCandidates extends OwnerSupervisorCommand {
*/
package org.opendaylight.controller.eos.akka.owner.supervisor.command;
-import akka.actor.typed.ActorRef;
import java.io.Serializable;
+import org.apache.pekko.actor.typed.ActorRef;
/**
* Request sent from Candidate registration actors to clear the candidate from all entities. Issued at start to clear
*/
package org.opendaylight.controller.eos.akka.owner.supervisor.command;
-import akka.actor.typed.ActorRef;
-import akka.cluster.ddata.ORMap;
-import akka.cluster.ddata.ORSet;
-import akka.cluster.ddata.typed.javadsl.Replicator;
+import org.apache.pekko.actor.typed.ActorRef;
+import org.apache.pekko.cluster.ddata.ORMap;
+import org.apache.pekko.cluster.ddata.ORSet;
+import org.apache.pekko.cluster.ddata.typed.javadsl.Replicator;
import org.opendaylight.mdsal.eos.dom.api.DOMEntity;
public class ClearCandidatesUpdateResponse extends OwnerSupervisorCommand {
*/
package org.opendaylight.controller.eos.akka.owner.supervisor.command;
-import akka.actor.typed.ActorRef;
import java.io.Serializable;
+import org.apache.pekko.actor.typed.ActorRef;
import org.eclipse.jdt.annotation.Nullable;
public final class DeactivateDataCenter extends OwnerSupervisorCommand implements Serializable {
*/
package org.opendaylight.controller.eos.akka.owner.supervisor.command;
-import akka.actor.typed.ActorRef;
-import akka.pattern.StatusReply;
+import org.apache.pekko.actor.typed.ActorRef;
+import org.apache.pekko.pattern.StatusReply;
public final class GetEntitiesBackendRequest extends OwnerSupervisorRequest<GetEntitiesBackendReply> {
private static final long serialVersionUID = 1L;
*/
package org.opendaylight.controller.eos.akka.owner.supervisor.command;
-import akka.actor.typed.ActorRef;
-import akka.pattern.StatusReply;
+import org.apache.pekko.actor.typed.ActorRef;
+import org.apache.pekko.pattern.StatusReply;
import org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.controller.entity.owners.norev.EntityId;
public final class GetEntityBackendRequest extends AbstractEntityRequest<GetEntityBackendReply> {
*/
package org.opendaylight.controller.eos.akka.owner.supervisor.command;
-import akka.actor.typed.ActorRef;
-import akka.pattern.StatusReply;
+import org.apache.pekko.actor.typed.ActorRef;
+import org.apache.pekko.pattern.StatusReply;
import org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.controller.entity.owners.norev.EntityId;
public final class GetEntityOwnerBackendRequest extends AbstractEntityRequest<GetEntityOwnerBackendReply> {
*/
package org.opendaylight.controller.eos.akka.owner.supervisor.command;
-import akka.cluster.ddata.ORMap;
-import akka.cluster.ddata.ORSet;
-import akka.cluster.ddata.typed.javadsl.Replicator.GetResponse;
+import org.apache.pekko.cluster.ddata.ORMap;
+import org.apache.pekko.cluster.ddata.ORSet;
+import org.apache.pekko.cluster.ddata.typed.javadsl.Replicator.GetResponse;
import org.eclipse.jdt.annotation.Nullable;
import org.opendaylight.mdsal.eos.dom.api.DOMEntity;
import static java.util.Objects.requireNonNull;
-import akka.cluster.ddata.LWWRegister;
-import akka.cluster.ddata.typed.javadsl.Replicator.GetResponse;
+import org.apache.pekko.cluster.ddata.LWWRegister;
+import org.apache.pekko.cluster.ddata.typed.javadsl.Replicator.GetResponse;
import org.eclipse.jdt.annotation.NonNull;
public final class InitialOwnerSync extends OwnerSupervisorCommand {
import static java.util.Objects.requireNonNull;
-import akka.actor.Address;
import com.google.common.base.MoreObjects;
import java.util.Set;
+import org.apache.pekko.actor.Address;
import org.eclipse.jdt.annotation.NonNull;
public abstract class InternalClusterEvent extends OwnerSupervisorCommand {
*/
package org.opendaylight.controller.eos.akka.owner.supervisor.command;
-import akka.actor.Address;
import java.util.Set;
+import org.apache.pekko.actor.Address;
public final class MemberDownEvent extends InternalClusterEvent {
public MemberDownEvent(final Address address, final Set<String> roles) {
*/
package org.opendaylight.controller.eos.akka.owner.supervisor.command;
-import akka.actor.Address;
import java.util.Set;
+import org.apache.pekko.actor.Address;
public final class MemberReachableEvent extends InternalClusterEvent {
public MemberReachableEvent(final Address address, final Set<String> roles) {
*/
package org.opendaylight.controller.eos.akka.owner.supervisor.command;
-import akka.actor.Address;
import java.util.Set;
+import org.apache.pekko.actor.Address;
public final class MemberUnreachableEvent extends InternalClusterEvent {
public MemberUnreachableEvent(final Address address, final Set<String> roles) {
*/
package org.opendaylight.controller.eos.akka.owner.supervisor.command;
-import akka.actor.Address;
import java.util.Set;
+import org.apache.pekko.actor.Address;
public final class MemberUpEvent extends InternalClusterEvent {
public MemberUpEvent(final Address address, final Set<String> roles) {
import static java.util.Objects.requireNonNull;
-import akka.cluster.ddata.LWWRegister;
-import akka.cluster.ddata.typed.javadsl.Replicator.UpdateResponse;
+import org.apache.pekko.cluster.ddata.LWWRegister;
+import org.apache.pekko.cluster.ddata.typed.javadsl.Replicator.UpdateResponse;
import org.eclipse.jdt.annotation.NonNull;
public final class OwnerChanged extends OwnerSupervisorCommand {
import static java.util.Objects.requireNonNull;
-import akka.actor.typed.ActorRef;
-import akka.pattern.StatusReply;
import java.io.Serializable;
+import org.apache.pekko.actor.typed.ActorRef;
+import org.apache.pekko.pattern.StatusReply;
import org.eclipse.jdt.annotation.NonNull;
public abstract class OwnerSupervisorRequest<T extends OwnerSupervisorReply> extends OwnerSupervisorCommand
*/
package org.opendaylight.controller.eos.akka.registry.candidate;
-import akka.actor.typed.Behavior;
-import akka.actor.typed.javadsl.AbstractBehavior;
-import akka.actor.typed.javadsl.ActorContext;
-import akka.actor.typed.javadsl.Behaviors;
-import akka.actor.typed.javadsl.Receive;
-import akka.cluster.Cluster;
-import akka.cluster.ddata.Key;
-import akka.cluster.ddata.ORMap;
-import akka.cluster.ddata.ORMapKey;
-import akka.cluster.ddata.ORSet;
-import akka.cluster.ddata.SelfUniqueAddress;
-import akka.cluster.ddata.typed.javadsl.DistributedData;
-import akka.cluster.ddata.typed.javadsl.Replicator;
-import akka.cluster.ddata.typed.javadsl.ReplicatorMessageAdapter;
import java.util.Set;
+import org.apache.pekko.actor.typed.Behavior;
+import org.apache.pekko.actor.typed.javadsl.AbstractBehavior;
+import org.apache.pekko.actor.typed.javadsl.ActorContext;
+import org.apache.pekko.actor.typed.javadsl.Behaviors;
+import org.apache.pekko.actor.typed.javadsl.Receive;
+import org.apache.pekko.cluster.Cluster;
+import org.apache.pekko.cluster.ddata.Key;
+import org.apache.pekko.cluster.ddata.ORMap;
+import org.apache.pekko.cluster.ddata.ORMapKey;
+import org.apache.pekko.cluster.ddata.ORSet;
+import org.apache.pekko.cluster.ddata.SelfUniqueAddress;
+import org.apache.pekko.cluster.ddata.typed.javadsl.DistributedData;
+import org.apache.pekko.cluster.ddata.typed.javadsl.Replicator;
+import org.apache.pekko.cluster.ddata.typed.javadsl.ReplicatorMessageAdapter;
import org.opendaylight.controller.eos.akka.registry.candidate.command.CandidateRegistryCommand;
import org.opendaylight.controller.eos.akka.registry.candidate.command.InternalUpdateResponse;
import org.opendaylight.controller.eos.akka.registry.candidate.command.RegisterCandidate;
*/
package org.opendaylight.controller.eos.akka.registry.candidate;
-import akka.actor.typed.ActorRef;
-import akka.actor.typed.Behavior;
-import akka.actor.typed.javadsl.AbstractBehavior;
-import akka.actor.typed.javadsl.ActorContext;
-import akka.actor.typed.javadsl.Behaviors;
-import akka.actor.typed.javadsl.Receive;
-import akka.actor.typed.javadsl.StashBuffer;
-import akka.cluster.Cluster;
import java.time.Duration;
import java.util.Set;
+import org.apache.pekko.actor.typed.ActorRef;
+import org.apache.pekko.actor.typed.Behavior;
+import org.apache.pekko.actor.typed.javadsl.AbstractBehavior;
+import org.apache.pekko.actor.typed.javadsl.ActorContext;
+import org.apache.pekko.actor.typed.javadsl.Behaviors;
+import org.apache.pekko.actor.typed.javadsl.Receive;
+import org.apache.pekko.actor.typed.javadsl.StashBuffer;
+import org.apache.pekko.cluster.Cluster;
import org.opendaylight.controller.eos.akka.owner.supervisor.command.ClearCandidatesForMember;
import org.opendaylight.controller.eos.akka.owner.supervisor.command.ClearCandidatesResponse;
import org.opendaylight.controller.eos.akka.owner.supervisor.command.OwnerSupervisorCommand;
import static java.util.Objects.requireNonNull;
-import akka.cluster.ddata.ORMap;
-import akka.cluster.ddata.ORSet;
-import akka.cluster.ddata.typed.javadsl.Replicator.UpdateResponse;
+import org.apache.pekko.cluster.ddata.ORMap;
+import org.apache.pekko.cluster.ddata.ORSet;
+import org.apache.pekko.cluster.ddata.typed.javadsl.Replicator.UpdateResponse;
import org.eclipse.jdt.annotation.NonNull;
import org.opendaylight.mdsal.eos.dom.api.DOMEntity;
*/
package org.opendaylight.controller.eos.akka.registry.listener.owner;
-import akka.actor.typed.ActorRef;
-import akka.actor.typed.Behavior;
-import akka.actor.typed.javadsl.AbstractBehavior;
-import akka.actor.typed.javadsl.ActorContext;
-import akka.actor.typed.javadsl.Behaviors;
-import akka.actor.typed.javadsl.Receive;
-import akka.cluster.ddata.LWWRegister;
-import akka.cluster.ddata.LWWRegisterKey;
-import akka.cluster.ddata.typed.javadsl.DistributedData;
-import akka.cluster.ddata.typed.javadsl.Replicator;
-import akka.cluster.ddata.typed.javadsl.ReplicatorMessageAdapter;
import java.time.Duration;
+import org.apache.pekko.actor.typed.ActorRef;
+import org.apache.pekko.actor.typed.Behavior;
+import org.apache.pekko.actor.typed.javadsl.AbstractBehavior;
+import org.apache.pekko.actor.typed.javadsl.ActorContext;
+import org.apache.pekko.actor.typed.javadsl.Behaviors;
+import org.apache.pekko.actor.typed.javadsl.Receive;
+import org.apache.pekko.cluster.ddata.LWWRegister;
+import org.apache.pekko.cluster.ddata.LWWRegisterKey;
+import org.apache.pekko.cluster.ddata.typed.javadsl.DistributedData;
+import org.apache.pekko.cluster.ddata.typed.javadsl.Replicator;
+import org.apache.pekko.cluster.ddata.typed.javadsl.ReplicatorMessageAdapter;
import org.opendaylight.controller.eos.akka.registry.listener.owner.command.InitialOwnerSync;
import org.opendaylight.controller.eos.akka.registry.listener.owner.command.ListenerCommand;
import org.opendaylight.controller.eos.akka.registry.listener.owner.command.OwnerChanged;
import static java.util.Objects.requireNonNull;
-import akka.cluster.ddata.LWWRegister;
-import akka.cluster.ddata.typed.javadsl.Replicator.GetResponse;
+import org.apache.pekko.cluster.ddata.LWWRegister;
+import org.apache.pekko.cluster.ddata.typed.javadsl.Replicator.GetResponse;
import org.eclipse.jdt.annotation.NonNull;
public final class InitialOwnerSync extends ListenerCommand {
import static java.util.Objects.requireNonNull;
-import akka.cluster.ddata.LWWRegister;
-import akka.cluster.ddata.typed.javadsl.Replicator.SubscribeResponse;
+import org.apache.pekko.cluster.ddata.LWWRegister;
+import org.apache.pekko.cluster.ddata.typed.javadsl.Replicator.SubscribeResponse;
import org.eclipse.jdt.annotation.NonNull;
/**
*/
package org.opendaylight.controller.eos.akka.registry.listener.type;
-import akka.actor.typed.ActorRef;
-import akka.actor.typed.Behavior;
-import akka.actor.typed.javadsl.AbstractBehavior;
-import akka.actor.typed.javadsl.ActorContext;
-import akka.actor.typed.javadsl.Behaviors;
-import akka.actor.typed.javadsl.Receive;
-import akka.cluster.ddata.ORMap;
-import akka.cluster.ddata.ORSet;
-import akka.cluster.ddata.typed.javadsl.DistributedData;
-import akka.cluster.ddata.typed.javadsl.Replicator.Changed;
-import akka.cluster.ddata.typed.javadsl.Replicator.SubscribeResponse;
-import akka.cluster.ddata.typed.javadsl.ReplicatorMessageAdapter;
import com.google.common.collect.ImmutableSet;
import com.google.common.collect.Sets;
import java.time.Duration;
import java.util.Set;
import java.util.UUID;
import java.util.stream.Collectors;
+import org.apache.pekko.actor.typed.ActorRef;
+import org.apache.pekko.actor.typed.Behavior;
+import org.apache.pekko.actor.typed.javadsl.AbstractBehavior;
+import org.apache.pekko.actor.typed.javadsl.ActorContext;
+import org.apache.pekko.actor.typed.javadsl.Behaviors;
+import org.apache.pekko.actor.typed.javadsl.Receive;
+import org.apache.pekko.cluster.ddata.ORMap;
+import org.apache.pekko.cluster.ddata.ORSet;
+import org.apache.pekko.cluster.ddata.typed.javadsl.DistributedData;
+import org.apache.pekko.cluster.ddata.typed.javadsl.Replicator.Changed;
+import org.apache.pekko.cluster.ddata.typed.javadsl.Replicator.SubscribeResponse;
+import org.apache.pekko.cluster.ddata.typed.javadsl.ReplicatorMessageAdapter;
import org.opendaylight.controller.eos.akka.registry.candidate.CandidateRegistry;
import org.opendaylight.controller.eos.akka.registry.listener.owner.SingleEntityListenerActor;
import org.opendaylight.controller.eos.akka.registry.listener.owner.command.ListenerCommand;
import static java.util.Objects.requireNonNull;
-import akka.actor.typed.ActorRef;
-import akka.actor.typed.Behavior;
-import akka.actor.typed.javadsl.AbstractBehavior;
-import akka.actor.typed.javadsl.ActorContext;
-import akka.actor.typed.javadsl.Behaviors;
-import akka.actor.typed.javadsl.Receive;
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;
+import org.apache.pekko.actor.typed.ActorRef;
+import org.apache.pekko.actor.typed.Behavior;
+import org.apache.pekko.actor.typed.javadsl.AbstractBehavior;
+import org.apache.pekko.actor.typed.javadsl.ActorContext;
+import org.apache.pekko.actor.typed.javadsl.Behaviors;
+import org.apache.pekko.actor.typed.javadsl.Receive;
import org.opendaylight.controller.eos.akka.registry.listener.type.command.RegisterListener;
import org.opendaylight.controller.eos.akka.registry.listener.type.command.TerminateListener;
import org.opendaylight.controller.eos.akka.registry.listener.type.command.TypeListenerCommand;
import static java.util.Objects.requireNonNull;
-import akka.cluster.ddata.ORMap;
-import akka.cluster.ddata.ORSet;
-import akka.cluster.ddata.typed.javadsl.Replicator.SubscribeResponse;
import com.google.common.base.MoreObjects;
+import org.apache.pekko.cluster.ddata.ORMap;
+import org.apache.pekko.cluster.ddata.ORSet;
+import org.apache.pekko.cluster.ddata.typed.javadsl.Replicator.SubscribeResponse;
import org.eclipse.jdt.annotation.NonNull;
import org.opendaylight.mdsal.eos.dom.api.DOMEntity;
import static org.awaitility.Awaitility.await;
import static org.junit.Assert.assertEquals;
-import akka.actor.ActorSystem;
-import akka.actor.Address;
-import akka.actor.typed.ActorRef;
-import akka.actor.typed.Behavior;
-import akka.actor.typed.javadsl.Adapter;
-import akka.actor.typed.javadsl.AskPattern;
-import akka.actor.typed.javadsl.Behaviors;
-import akka.cluster.ddata.LWWRegister;
-import akka.cluster.ddata.LWWRegisterKey;
-import akka.cluster.ddata.ORMap;
-import akka.cluster.ddata.ORSet;
-import akka.cluster.ddata.typed.javadsl.DistributedData;
-import akka.cluster.ddata.typed.javadsl.Replicator;
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;
import java.time.Duration;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.actor.Address;
+import org.apache.pekko.actor.typed.ActorRef;
+import org.apache.pekko.actor.typed.Behavior;
+import org.apache.pekko.actor.typed.javadsl.Adapter;
+import org.apache.pekko.actor.typed.javadsl.AskPattern;
+import org.apache.pekko.actor.typed.javadsl.Behaviors;
+import org.apache.pekko.cluster.ddata.LWWRegister;
+import org.apache.pekko.cluster.ddata.LWWRegisterKey;
+import org.apache.pekko.cluster.ddata.ORMap;
+import org.apache.pekko.cluster.ddata.ORSet;
+import org.apache.pekko.cluster.ddata.typed.javadsl.DistributedData;
+import org.apache.pekko.cluster.ddata.typed.javadsl.Replicator;
import org.opendaylight.controller.eos.akka.bootstrap.EOSMain;
import org.opendaylight.controller.eos.akka.bootstrap.command.BootstrapCommand;
import org.opendaylight.controller.eos.akka.bootstrap.command.GetRunningContext;
protected static final String DEFAULT_DATACENTER = "dc-default";
protected static final List<String> TWO_NODE_SEED_NODES =
- List.of("akka://ClusterSystem@127.0.0.1:2550",
- "akka://ClusterSystem@127.0.0.1:2551");
+ List.of("pekko://ClusterSystem@127.0.0.1:2550",
+ "pekko://ClusterSystem@127.0.0.1:2551");
protected static final List<String> THREE_NODE_SEED_NODES =
- List.of("akka://ClusterSystem@127.0.0.1:2550",
- "akka://ClusterSystem@127.0.0.1:2551",
- "akka://ClusterSystem@127.0.0.1:2552");
+ List.of("pekko://ClusterSystem@127.0.0.1:2550",
+ "pekko://ClusterSystem@127.0.0.1:2551",
+ "pekko://ClusterSystem@127.0.0.1:2552");
protected static final List<String> DATACENTER_SEED_NODES =
- List.of("akka://ClusterSystem@127.0.0.1:2550",
- "akka://ClusterSystem@127.0.0.1:2551",
- "akka://ClusterSystem@127.0.0.1:2552",
- "akka://ClusterSystem@127.0.0.1:2553");
+ List.of("pekko://ClusterSystem@127.0.0.1:2550",
+ "pekko://ClusterSystem@127.0.0.1:2551",
+ "pekko://ClusterSystem@127.0.0.1:2552",
+ "pekko://ClusterSystem@127.0.0.1:2553");
protected static final BindingDOMCodecServices CODEC_CONTEXT =
new DefaultBindingDOMCodecFactory().createBindingDOMCodec(BindingRuntimeHelpers.createRuntimeContext());
- private static final String REMOTE_PROTOCOL = "akka";
- private static final String PORT_PARAM = "akka.remote.artery.canonical.port";
- private static final String ROLE_PARAM = "akka.cluster.roles";
- private static final String SEED_NODES_PARAM = "akka.cluster.seed-nodes";
- private static final String DATA_CENTER_PARAM = "akka.cluster.multi-data-center.self-data-center";
+ private static final String REMOTE_PROTOCOL = "pekko";
+ private static final String PORT_PARAM = "pekko.remote.artery.canonical.port";
+ private static final String ROLE_PARAM = "pekko.cluster.roles";
+ private static final String SEED_NODES_PARAM = "pekko.cluster.seed-nodes";
+ private static final String DATA_CENTER_PARAM = "pekko.cluster.multi-data-center.self-data-center";
protected static MockNativeEntityOwnershipService startupNativeService(final int port, final List<String> roles,
final List<String> seedNodes)
final Config config = ConfigFactory.parseMap(overrides)
.withFallback(ConfigFactory.load());
- // Create a classic Akka system since thats what we will have in osgi
- final akka.actor.ActorSystem system = akka.actor.ActorSystem.create("ClusterSystem", config);
+ // Create a classic Pekko system since that's what we will have in OSGi
+ final var system = org.apache.pekko.actor.ActorSystem.create("ClusterSystem", config);
return new MockNativeEntityOwnershipService(system);
}
final Config config = ConfigFactory.parseMap(overrides).withFallback(ConfigFactory.load());
- // Create a classic Akka system since thats what we will have in osgi
- final akka.actor.ActorSystem system = akka.actor.ActorSystem.create("ClusterSystem", config);
+ // Create a classic Pekko system since that's what we will have in OSGi
+ final var system = org.apache.pekko.actor.ActorSystem.create("ClusterSystem", config);
final ActorRef<BootstrapCommand> eosBootstrap =
Adapter.spawn(system, bootstrap.get(), "EOSBootstrap");
protected static ClusterNode startupWithDatacenter(final int port, final List<String> roles,
final List<String> seedNodes, final String dataCenter)
throws ExecutionException, InterruptedException {
- final akka.actor.ActorSystem system = startupActorSystem(port, roles, seedNodes, dataCenter);
+ final org.apache.pekko.actor.ActorSystem system = startupActorSystem(port, roles, seedNodes, dataCenter);
final ActorRef<BootstrapCommand> eosBootstrap =
Adapter.spawn(system, EOSMain.create(CODEC_CONTEXT.getInstanceIdentifierCodec()), "EOSBootstrap");
runningContext.getCandidateRegistry(), runningContext.getOwnerSupervisor());
}
- protected static akka.actor.ActorSystem startupActorSystem(final int port, final List<String> roles,
+ protected static org.apache.pekko.actor.ActorSystem startupActorSystem(final int port, final List<String> roles,
final List<String> seedNodes) {
final Map<String, Object> overrides = new HashMap<>();
overrides.put(PORT_PARAM, port);
final Config config = ConfigFactory.parseMap(overrides)
.withFallback(ConfigFactory.load());
- // Create a classic Akka system since thats what we will have in osgi
- return akka.actor.ActorSystem.create("ClusterSystem", config);
+ // Create a classic Pekko system since that's what we will have in OSGi
+ return org.apache.pekko.actor.ActorSystem.create("ClusterSystem", config);
}
- protected static akka.actor.ActorSystem startupActorSystem(final int port, final List<String> roles,
+ protected static org.apache.pekko.actor.ActorSystem startupActorSystem(final int port, final List<String> roles,
final List<String> seedNodes, final String dataCenter) {
final Map<String, Object> overrides = new HashMap<>();
overrides.put(PORT_PARAM, port);
final Config config = ConfigFactory.parseMap(overrides)
.withFallback(ConfigFactory.load());
- // Create a classic Akka system since thats what we will have in osgi
- return akka.actor.ActorSystem.create("ClusterSystem", config);
+ // Create a classic Pekko system since that's what we will have in OSGi
+ return org.apache.pekko.actor.ActorSystem.create("ClusterSystem", config);
}
private static Behavior<BootstrapCommand> rootBehavior() {
protected static final class ClusterNode {
private final int port;
private final List<String> roles;
- private final akka.actor.typed.ActorSystem<Void> actorSystem;
+ private final org.apache.pekko.actor.typed.ActorSystem<Void> actorSystem;
private final ActorRef<BootstrapCommand> eosBootstrap;
private final ActorRef<TypeListenerRegistryCommand> listenerRegistry;
private final ActorRef<CandidateRegistryCommand> candidateRegistry;
return port;
}
- public akka.actor.typed.ActorSystem<Void> getActorSystem() {
+ public org.apache.pekko.actor.typed.ActorSystem<Void> getActorSystem() {
return actorSystem;
}
import static org.junit.Assert.assertTrue;
import static org.junit.Assert.fail;
-import akka.actor.ActorSystem;
-import akka.actor.testkit.typed.javadsl.ActorTestKit;
-import akka.actor.typed.ActorRef;
-import akka.actor.typed.javadsl.Adapter;
-import akka.actor.typed.javadsl.AskPattern;
-import akka.cluster.ddata.ORMap;
-import akka.cluster.ddata.ORSet;
-import akka.cluster.ddata.typed.javadsl.DistributedData;
-import akka.cluster.ddata.typed.javadsl.Replicator;
import com.typesafe.config.ConfigFactory;
import java.time.Duration;
import java.util.List;
import java.util.Optional;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.ExecutionException;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.actor.testkit.typed.javadsl.ActorTestKit;
+import org.apache.pekko.actor.typed.ActorRef;
+import org.apache.pekko.actor.typed.javadsl.Adapter;
+import org.apache.pekko.actor.typed.javadsl.AskPattern;
+import org.apache.pekko.cluster.ddata.ORMap;
+import org.apache.pekko.cluster.ddata.ORSet;
+import org.apache.pekko.cluster.ddata.typed.javadsl.DistributedData;
+import org.apache.pekko.cluster.ddata.typed.javadsl.Replicator;
import org.awaitility.Durations;
import org.junit.After;
import org.junit.Before;
static final QName QNAME = QName.create("test", "2015-08-11", "foo");
private ActorSystem system;
- private akka.actor.typed.ActorSystem<Void> typedSystem;
+ private org.apache.pekko.actor.typed.ActorSystem<Void> typedSystem;
private AkkaEntityOwnershipService service;
private ActorRef<Replicator.Command> replicator;
*/
package org.opendaylight.controller.eos.akka;
-import akka.actor.testkit.typed.javadsl.ActorTestKit;
-import akka.cluster.Member;
-import akka.cluster.MemberStatus;
-import akka.cluster.typed.Cluster;
import java.time.Duration;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
+import org.apache.pekko.actor.testkit.typed.javadsl.ActorTestKit;
+import org.apache.pekko.cluster.Member;
+import org.apache.pekko.cluster.MemberStatus;
+import org.apache.pekko.cluster.typed.Cluster;
import org.awaitility.Awaitility;
import org.junit.After;
import org.junit.Before;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;
-import akka.actor.ActorSystem;
-import akka.actor.testkit.typed.javadsl.ActorTestKit;
-import akka.actor.typed.javadsl.Adapter;
-import akka.cluster.Member;
-import akka.cluster.MemberStatus;
-import akka.cluster.typed.Cluster;
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.actor.testkit.typed.javadsl.ActorTestKit;
+import org.apache.pekko.actor.typed.javadsl.Adapter;
+import org.apache.pekko.cluster.Member;
+import org.apache.pekko.cluster.MemberStatus;
+import org.apache.pekko.cluster.typed.Cluster;
import org.awaitility.Awaitility;
import org.junit.After;
import org.junit.Before;
*/
package org.opendaylight.controller.eos.akka;
-import akka.actor.testkit.typed.javadsl.ActorTestKit;
import java.time.Duration;
import java.util.List;
+import org.apache.pekko.actor.testkit.typed.javadsl.ActorTestKit;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
*/
package org.opendaylight.controller.eos.akka;
-import akka.actor.testkit.typed.javadsl.ActorTestKit;
-import akka.cluster.Member;
-import akka.cluster.MemberStatus;
-import akka.cluster.typed.Cluster;
import com.google.common.collect.ImmutableList;
import java.time.Duration;
import java.util.List;
+import org.apache.pekko.actor.testkit.typed.javadsl.ActorTestKit;
+import org.apache.pekko.cluster.Member;
+import org.apache.pekko.cluster.MemberStatus;
+import org.apache.pekko.cluster.typed.Cluster;
import org.awaitility.Awaitility;
import org.junit.After;
import org.junit.Before;
import static org.awaitility.Awaitility.await;
-import akka.actor.testkit.typed.javadsl.ActorTestKit;
-import akka.cluster.Member;
-import akka.cluster.MemberStatus;
-import akka.cluster.typed.Cluster;
import com.google.common.collect.ImmutableList;
import java.time.Duration;
import java.util.List;
+import org.apache.pekko.actor.testkit.typed.javadsl.ActorTestKit;
+import org.apache.pekko.cluster.Member;
+import org.apache.pekko.cluster.MemberStatus;
+import org.apache.pekko.cluster.typed.Cluster;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
*/
package org.opendaylight.controller.eos.akka.owner.supervisor;
-import akka.actor.testkit.typed.javadsl.ActorTestKit;
-import akka.actor.typed.ActorRef;
-import akka.actor.typed.Behavior;
-import akka.actor.typed.javadsl.AbstractBehavior;
-import akka.actor.typed.javadsl.ActorContext;
-import akka.actor.typed.javadsl.Behaviors;
-import akka.actor.typed.javadsl.Receive;
-import akka.cluster.typed.Cluster;
-import akka.cluster.typed.ClusterSingleton;
-import akka.cluster.typed.SingletonActor;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
+import org.apache.pekko.actor.testkit.typed.javadsl.ActorTestKit;
+import org.apache.pekko.actor.typed.ActorRef;
+import org.apache.pekko.actor.typed.Behavior;
+import org.apache.pekko.actor.typed.javadsl.AbstractBehavior;
+import org.apache.pekko.actor.typed.javadsl.ActorContext;
+import org.apache.pekko.actor.typed.javadsl.Behaviors;
+import org.apache.pekko.actor.typed.javadsl.Receive;
+import org.apache.pekko.cluster.typed.Cluster;
+import org.apache.pekko.cluster.typed.ClusterSingleton;
+import org.apache.pekko.cluster.typed.SingletonActor;
import org.junit.Test;
import org.opendaylight.controller.eos.akka.AbstractNativeEosTest;
import org.opendaylight.controller.eos.akka.bootstrap.command.BootstrapCommand;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
-import akka.actor.testkit.typed.javadsl.ActorTestKit;
-import akka.actor.typed.javadsl.Adapter;
-import akka.cluster.Member;
-import akka.cluster.MemberStatus;
-import akka.cluster.typed.Cluster;
-import akka.testkit.javadsl.TestKit;
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.ListenableFuture;
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;
+import org.apache.pekko.actor.testkit.typed.javadsl.ActorTestKit;
+import org.apache.pekko.actor.typed.javadsl.Adapter;
+import org.apache.pekko.cluster.Member;
+import org.apache.pekko.cluster.MemberStatus;
+import org.apache.pekko.cluster.typed.Cluster;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.awaitility.Awaitility;
import org.junit.After;
import org.junit.Before;
-akka {
+pekko {
loglevel = debug
actor {
warn-about-java-serializer-usage = off
}
cluster {
seed-nodes = [
- "akka://ClusterSystem@127.0.0.1:2550"]
+ "pekko://ClusterSystem@127.0.0.1:2550"]
roles = [
"member-1"
]
- downing-provider-class = "akka.cluster.sbr.SplitBrainResolverProvider"
+ downing-provider-class = "org.apache.pekko.cluster.sbr.SplitBrainResolverProvider"
distributed-data {
# How often the Replicator should send out gossip information.
*/
package org.opendaylight.controller.cluster.example;
-import akka.actor.ActorRef;
-import akka.actor.Props;
-import akka.actor.UntypedAbstractActor;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.Props;
+import org.apache.pekko.actor.UntypedAbstractActor;
import org.opendaylight.controller.cluster.example.messages.KeyValue;
import org.opendaylight.controller.cluster.example.messages.KeyValueSaved;
import org.slf4j.Logger;
*/
package org.opendaylight.controller.cluster.example;
-import akka.actor.ActorRef;
-import akka.actor.Props;
import com.google.common.io.ByteSource;
import java.io.IOException;
import java.io.OutputStream;
import java.util.Map;
import java.util.Optional;
import org.apache.commons.lang3.SerializationUtils;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.Props;
import org.opendaylight.controller.cluster.example.messages.KeyValue;
import org.opendaylight.controller.cluster.example.messages.KeyValueSaved;
import org.opendaylight.controller.cluster.example.messages.PrintRole;
*/
package org.opendaylight.controller.cluster.example;
-import akka.actor.ActorRef;
-import akka.actor.Cancellable;
-import akka.actor.Props;
import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.TimeUnit;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.Cancellable;
+import org.apache.pekko.actor.Props;
import org.opendaylight.controller.cluster.common.actor.AbstractUntypedActor;
import org.opendaylight.controller.cluster.example.messages.RegisterListener;
import org.opendaylight.controller.cluster.notifications.RegisterRoleChangeListener;
*/
public class ExampleRoleChangeListener extends AbstractUntypedActor implements AutoCloseable {
// the akka url should be set to the notifiers actor-system and domain.
- private static final String NOTIFIER_AKKA_URL = "akka://raft-test@127.0.0.1:2550/user/";
+ private static final String NOTIFIER_AKKA_URL = "pekko://raft-test@127.0.0.1:2550/user/";
private final Map<String, Boolean> notifierRegistrationStatus = new HashMap<>();
private Cancellable registrationSchedule = null;
*/
package org.opendaylight.controller.cluster.example;
-import akka.actor.ActorRef;
import java.util.HashMap;
import java.util.Map;
import java.util.Random;
+import org.apache.pekko.actor.ActorRef;
import org.opendaylight.controller.cluster.example.messages.KeyValue;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
*/
package org.opendaylight.controller.cluster.example;
-import akka.actor.ActorRef;
-import akka.actor.ActorSystem;
-import akka.actor.PoisonPill;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.Charset;
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.actor.PoisonPill;
import org.opendaylight.controller.cluster.example.messages.KeyValue;
public final class Main {
private static Map<String, String> allPeers = new HashMap<>();
static {
- allPeers.put("example-1", "akka://default/user/example-1");
- allPeers.put("example-2", "akka://default/user/example-2");
- allPeers.put("example-3", "akka://default/user/example-3");
+ allPeers.put("example-1", "pekko://default/user/example-1");
+ allPeers.put("example-2", "pekko://default/user/example-2");
+ allPeers.put("example-3", "pekko://default/user/example-3");
}
private Main() {
*/
package org.opendaylight.controller.cluster.example;
-import akka.actor.ActorRef;
-import akka.actor.ActorSystem;
import com.google.common.collect.Lists;
import com.typesafe.config.ConfigFactory;
import java.io.BufferedReader;
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSystem;
import org.opendaylight.controller.cluster.example.messages.PrintRole;
import org.opendaylight.controller.cluster.example.messages.PrintState;
import org.opendaylight.controller.cluster.raft.ConfigParams;
public void createNodes(final int num) {
for (int i = 0; i < num; i++) {
nameCounter = nameCounter + 1;
- allPeers.put("example-" + nameCounter, "akka://raft-test/user/example-" + nameCounter);
+ allPeers.put("example-" + nameCounter, "pekko://raft-test/user/example-" + nameCounter);
}
for (String s : allPeers.keySet()) {
}
public void reinstateNode(final String actorName) {
- String address = "akka://default/user/" + actorName;
+ String address = "pekko://default/user/" + actorName;
allPeers.put(actorName, address);
ActorRef exampleActor = createExampleActor(actorName);
-akka {
+pekko {
loglevel = "DEBUG"
}
raft-test {
- akka {
+ pekko {
loglevel = "DEBUG"
# enable to test serialization only.
# serialize-messages = on
- provider = "akka.remote.RemoteActorRefProvider"
+ provider = "org.apache.pekko.remote.RemoteActorRefProvider"
}
remote {
raft-test-listener {
- akka {
+ pekko {
loglevel = "DEBUG"
actor {
- provider = "akka.remote.RemoteActorRefProvider"
+ provider = "org.apache.pekko.remote.RemoteActorRefProvider"
}
remote {
</dependency>
<dependency>
<groupId>org.opendaylight.controller</groupId>
- <artifactId>repackaged-akka</artifactId>
+ <artifactId>repackaged-pekko</artifactId>
</dependency>
<dependency>
<groupId>org.scala-lang</groupId>
<!-- Test Dependencies -->
<dependency>
- <groupId>com.typesafe.akka</groupId>
- <artifactId>akka-testkit_2.13</artifactId>
+ <groupId>org.apache.pekko</groupId>
+ <artifactId>pekko-testkit_2.13</artifactId>
</dependency>
<dependency>
<groupId>org.awaitility</groupId>
*/
package org.opendaylight.controller.cluster.raft;
-import akka.actor.ActorRef;
+import org.apache.pekko.actor.ActorRef;
import org.opendaylight.yangtools.concepts.Identifier;
/**
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
-import akka.actor.PoisonPill;
-import akka.actor.Props;
-import akka.actor.ReceiveTimeout;
-import akka.actor.UntypedAbstractActor;
import java.util.concurrent.TimeoutException;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.PoisonPill;
+import org.apache.pekko.actor.Props;
+import org.apache.pekko.actor.ReceiveTimeout;
+import org.apache.pekko.actor.UntypedAbstractActor;
import org.opendaylight.controller.cluster.raft.base.messages.CaptureSnapshot;
import org.opendaylight.controller.cluster.raft.base.messages.CaptureSnapshotReply;
import org.opendaylight.controller.cluster.raft.client.messages.GetSnapshotReply;
LOG.warn("{}: Got ReceiveTimeout for inactivity - did not receive CaptureSnapshotReply within {} ms",
params.id, params.receiveTimeout.toMillis());
- params.replyToActor.tell(new akka.actor.Status.Failure(new TimeoutException(String.format(
+ params.replyToActor.tell(new org.apache.pekko.actor.Status.Failure(new TimeoutException(String.format(
"Timed out after %d ms while waiting for CaptureSnapshotReply",
params.receiveTimeout.toMillis()))), getSelf());
getSelf().tell(PoisonPill.getInstance(), getSelf());
*/
package org.opendaylight.controller.cluster.raft;
-import akka.japi.Procedure;
+import org.apache.pekko.japi.Procedure;
/**
* An akka Procedure that does nothing.
*/
package org.opendaylight.controller.cluster.raft;
-import akka.actor.ActorRef;
import com.google.common.io.ByteSource;
import java.io.OutputStream;
import java.util.Optional;
+import org.apache.pekko.actor.ActorRef;
import org.opendaylight.controller.cluster.raft.persisted.EmptyState;
import org.opendaylight.controller.cluster.raft.persisted.Snapshot.State;
import static com.google.common.base.Verify.verify;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
-import akka.actor.ActorSelection;
-import akka.actor.PoisonPill;
-import akka.actor.Status;
-import akka.persistence.JournalProtocol;
-import akka.persistence.SnapshotProtocol;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.collect.ImmutableList;
import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
import java.util.Optional;
import java.util.concurrent.TimeUnit;
import org.apache.commons.lang3.time.DurationFormatUtils;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSelection;
+import org.apache.pekko.actor.PoisonPill;
+import org.apache.pekko.actor.Status;
+import org.apache.pekko.persistence.JournalProtocol;
+import org.apache.pekko.persistence.SnapshotProtocol;
import org.eclipse.jdt.annotation.NonNull;
import org.eclipse.jdt.annotation.Nullable;
import org.opendaylight.controller.cluster.DataPersistenceProvider;
*/
package org.opendaylight.controller.cluster.raft;
-import akka.actor.ActorRef;
-import akka.actor.ActorSelection;
-import akka.actor.ActorSystem;
-import akka.actor.Props;
-import akka.cluster.Cluster;
import com.google.common.annotations.VisibleForTesting;
import java.util.Collection;
import java.util.Optional;
import java.util.concurrent.Executor;
import java.util.function.Consumer;
import java.util.function.LongSupplier;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSelection;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.actor.Props;
+import org.apache.pekko.cluster.Cluster;
import org.eclipse.jdt.annotation.NonNull;
import org.eclipse.jdt.annotation.Nullable;
import org.opendaylight.controller.cluster.DataPersistenceProvider;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorContext;
-import akka.actor.ActorRef;
-import akka.actor.ActorSelection;
-import akka.actor.ActorSystem;
-import akka.actor.Props;
-import akka.cluster.Cluster;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.collect.ImmutableList;
import java.util.Collection;
import java.util.concurrent.Executor;
import java.util.function.Consumer;
import java.util.function.LongSupplier;
+import org.apache.pekko.actor.ActorContext;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSelection;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.actor.Props;
+import org.apache.pekko.cluster.Cluster;
import org.eclipse.jdt.annotation.NonNull;
import org.opendaylight.controller.cluster.DataPersistenceProvider;
import org.opendaylight.controller.cluster.io.FileBackedOutputStreamFactory;
import static java.util.Objects.requireNonNull;
-import akka.japi.Procedure;
+import org.apache.pekko.japi.Procedure;
import org.opendaylight.controller.cluster.DataPersistenceProvider;
import org.opendaylight.controller.cluster.DelegatingPersistentDataProvider;
import org.opendaylight.controller.cluster.PersistentDataProvider;
*/
package org.opendaylight.controller.cluster.raft;
-import akka.actor.ActorRef;
-import akka.actor.ActorSelection;
-import akka.actor.Cancellable;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.base.Stopwatch;
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
import java.util.concurrent.TimeUnit;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSelection;
+import org.apache.pekko.actor.Cancellable;
import org.eclipse.jdt.annotation.Nullable;
import org.opendaylight.controller.cluster.raft.base.messages.LeaderTransitioning;
import org.opendaylight.controller.cluster.raft.behaviors.Leader;
*/
package org.opendaylight.controller.cluster.raft;
-import akka.persistence.RecoveryCompleted;
-import akka.persistence.SnapshotOffer;
import com.google.common.base.Stopwatch;
import java.util.Collections;
import java.util.concurrent.TimeUnit;
+import org.apache.pekko.persistence.RecoveryCompleted;
+import org.apache.pekko.persistence.SnapshotOffer;
import org.opendaylight.controller.cluster.PersistentDataProvider;
import org.opendaylight.controller.cluster.raft.base.messages.ApplySnapshot;
import org.opendaylight.controller.cluster.raft.messages.PersistentPayload;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
-import akka.actor.ActorSelection;
-import akka.actor.Cancellable;
import com.google.common.collect.ImmutableList;
import java.util.ArrayDeque;
import java.util.Collection;
import java.util.Map;
import java.util.Queue;
import java.util.UUID;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSelection;
+import org.apache.pekko.actor.Cancellable;
import org.eclipse.jdt.annotation.Nullable;
import org.opendaylight.controller.cluster.raft.base.messages.ApplyState;
import org.opendaylight.controller.cluster.raft.base.messages.SnapshotComplete;
*/
package org.opendaylight.controller.cluster.raft;
-import akka.actor.ActorRef;
import com.google.common.io.ByteSource;
import java.io.IOException;
import java.io.OutputStream;
import java.util.Optional;
+import org.apache.pekko.actor.ActorRef;
import org.eclipse.jdt.annotation.NonNull;
import org.opendaylight.controller.cluster.raft.persisted.Snapshot.State;
*/
package org.opendaylight.controller.cluster.raft;
-import akka.actor.ActorRef;
-import akka.persistence.SaveSnapshotFailure;
-import akka.persistence.SaveSnapshotSuccess;
-import akka.util.Timeout;
import com.google.common.annotations.VisibleForTesting;
import java.util.Collections;
import java.util.Optional;
import java.util.concurrent.TimeUnit;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.persistence.SaveSnapshotFailure;
+import org.apache.pekko.persistence.SaveSnapshotSuccess;
+import org.apache.pekko.util.Timeout;
import org.opendaylight.controller.cluster.raft.base.messages.ApplySnapshot;
import org.opendaylight.controller.cluster.raft.base.messages.CaptureSnapshot;
import org.opendaylight.controller.cluster.raft.base.messages.CaptureSnapshotReply;
*/
package org.opendaylight.controller.cluster.raft;
-import akka.persistence.SnapshotSelectionCriteria;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.io.ByteSource;
import java.io.IOException;
import java.io.OutputStream;
import java.util.Optional;
import java.util.function.Consumer;
+import org.apache.pekko.persistence.SnapshotSelectionCriteria;
import org.eclipse.jdt.annotation.NonNull;
import org.eclipse.jdt.annotation.Nullable;
import org.opendaylight.controller.cluster.io.FileBackedOutputStream;
import static java.util.Objects.requireNonNull;
-import akka.actor.Cancellable;
import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
+import org.apache.pekko.actor.Cancellable;
import scala.concurrent.duration.FiniteDuration;
/**
import static java.util.Objects.requireNonNull;
-import akka.dispatch.ControlMessage;
+import org.apache.pekko.dispatch.ControlMessage;
import org.eclipse.jdt.annotation.NonNull;
import org.opendaylight.controller.cluster.raft.persisted.Snapshot;
package org.opendaylight.controller.cluster.raft.base.messages;
-import akka.actor.ActorRef;
-import akka.dispatch.ControlMessage;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.dispatch.ControlMessage;
import org.opendaylight.controller.cluster.raft.ReplicatedLogEntry;
import org.opendaylight.yangtools.concepts.Identifier;
package org.opendaylight.controller.cluster.raft.base.messages;
-import akka.dispatch.ControlMessage;
import java.util.Collections;
import java.util.List;
+import org.apache.pekko.dispatch.ControlMessage;
import org.opendaylight.controller.cluster.raft.ReplicatedLogEntry;
public class CaptureSnapshot implements ControlMessage {
import static java.util.Objects.requireNonNull;
-import akka.dispatch.ControlMessage;
import java.io.OutputStream;
import java.util.Optional;
+import org.apache.pekko.dispatch.ControlMessage;
import org.eclipse.jdt.annotation.NonNull;
import org.opendaylight.controller.cluster.raft.persisted.Snapshot;
package org.opendaylight.controller.cluster.raft.base.messages;
-import akka.dispatch.ControlMessage;
+import org.apache.pekko.dispatch.ControlMessage;
/**
* Local message sent to indicate the current election term has timed out.
*/
package org.opendaylight.controller.cluster.raft.base.messages;
-import akka.actor.ActorRef;
+import org.apache.pekko.actor.ActorRef;
import org.opendaylight.yangtools.concepts.Identifier;
public record Replicate(long logIndex, boolean sendImmediate, ActorRef clientActor, Identifier identifier) {
package org.opendaylight.controller.cluster.raft.base.messages;
-import akka.dispatch.ControlMessage;
+import org.apache.pekko.dispatch.ControlMessage;
/**
* This message is sent via a schedule to the Leader to prompt it to send a heartbeat to its followers.
*/
package org.opendaylight.controller.cluster.raft.base.messages;
-import akka.dispatch.ControlMessage;
+import org.apache.pekko.dispatch.ControlMessage;
/**
* Internal message sent when a snapshot capture is complete.
*/
package org.opendaylight.controller.cluster.raft.base.messages;
-import akka.dispatch.ControlMessage;
import java.io.Serializable;
+import org.apache.pekko.dispatch.ControlMessage;
/**
* Message sent to a follower to force an immediate election time out.
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
-import akka.actor.ActorSelection;
-import akka.actor.Cancellable;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.io.ByteSource;
import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
import java.util.OptionalInt;
import java.util.Queue;
import java.util.concurrent.TimeUnit;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSelection;
+import org.apache.pekko.actor.Cancellable;
import org.eclipse.jdt.annotation.Nullable;
import org.opendaylight.controller.cluster.io.SharedFileBackedOutputStream;
import org.opendaylight.controller.cluster.messaging.MessageSlicer;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
-import akka.actor.Cancellable;
-import akka.cluster.Cluster;
-import akka.cluster.Member;
import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
import java.util.Optional;
import java.util.Set;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.Cancellable;
+import org.apache.pekko.cluster.Cluster;
+import org.apache.pekko.cluster.Member;
import org.opendaylight.controller.cluster.raft.RaftActorContext;
import org.opendaylight.controller.cluster.raft.RaftState;
import org.opendaylight.controller.cluster.raft.ReplicatedLogEntry;
*/
package org.opendaylight.controller.cluster.raft.behaviors;
-import akka.actor.ActorRef;
-import akka.actor.ActorSelection;
import com.google.common.collect.ImmutableList;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSelection;
import org.opendaylight.controller.cluster.raft.PeerInfo;
import org.opendaylight.controller.cluster.raft.RaftActorContext;
import org.opendaylight.controller.cluster.raft.RaftState;
*/
package org.opendaylight.controller.cluster.raft.behaviors;
-import akka.actor.ActorRef;
-import akka.actor.ActorSelection;
-import akka.actor.Address;
-import akka.cluster.Cluster;
-import akka.cluster.ClusterEvent.CurrentClusterState;
-import akka.cluster.Member;
-import akka.cluster.MemberStatus;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.base.Stopwatch;
import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.Consumer;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSelection;
+import org.apache.pekko.actor.Address;
+import org.apache.pekko.cluster.Cluster;
+import org.apache.pekko.cluster.ClusterEvent.CurrentClusterState;
+import org.apache.pekko.cluster.Member;
+import org.apache.pekko.cluster.MemberStatus;
import org.eclipse.jdt.annotation.Nullable;
import org.opendaylight.controller.cluster.messaging.MessageAssembler;
import org.opendaylight.controller.cluster.raft.RaftActorContext;
*/
package org.opendaylight.controller.cluster.raft.behaviors;
-import akka.actor.ActorRef;
+import org.apache.pekko.actor.ActorRef;
import org.eclipse.jdt.annotation.Nullable;
import org.opendaylight.controller.cluster.raft.RaftActorContext;
import org.opendaylight.controller.cluster.raft.RaftState;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
-import akka.actor.ActorSelection;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.base.Stopwatch;
import java.util.Optional;
import java.util.concurrent.TimeUnit;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSelection;
import org.eclipse.jdt.annotation.NonNull;
import org.eclipse.jdt.annotation.Nullable;
import org.opendaylight.controller.cluster.raft.FollowerLogInformation;
*/
package org.opendaylight.controller.cluster.raft.behaviors;
-import akka.actor.ActorRef;
+import org.apache.pekko.actor.ActorRef;
import org.opendaylight.controller.cluster.raft.RaftActorContext;
import org.opendaylight.controller.cluster.raft.RaftState;
import org.opendaylight.controller.cluster.raft.base.messages.ApplyState;
*/
package org.opendaylight.controller.cluster.raft.behaviors;
-import akka.actor.ActorRef;
+import org.apache.pekko.actor.ActorRef;
import org.eclipse.jdt.annotation.Nullable;
import org.opendaylight.controller.cluster.raft.RaftState;
import static com.google.common.base.Preconditions.checkArgument;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
+import org.apache.pekko.actor.ActorRef;
import org.opendaylight.controller.cluster.raft.base.messages.FollowerInitialSyncUpStatus;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
*/
package org.opendaylight.controller.cluster.raft.client.messages;
-import akka.dispatch.ControlMessage;
+import org.apache.pekko.dispatch.ControlMessage;
/**
* Local message sent to a RaftActor to obtain a snapshot of statistical information. Returns an
*/
package org.opendaylight.controller.cluster.raft.client.messages;
-import akka.util.Timeout;
import java.util.Optional;
+import org.apache.pekko.util.Timeout;
/**
* Internal client message to get a snapshot of the current state based on whether or not persistence is enabled.
*/
package org.opendaylight.controller.cluster.raft.client.messages;
-import akka.dispatch.ControlMessage;
import java.io.Serializable;
+import org.apache.pekko.dispatch.ControlMessage;
/**
* Message sent to a raft actor to shutdown gracefully. If it's the leader it will transfer leadership to a
package org.opendaylight.controller.cluster.raft.messages;
-import akka.dispatch.ControlMessage;
import java.io.Serializable;
+import org.apache.pekko.dispatch.ControlMessage;
/**
* Interface implemented by all requests exchanged in the Raft protocol.
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
import java.io.Serializable;
+import org.apache.pekko.actor.ActorRef;
/**
* Message sent to leader to transfer leadership to a particular follower.
*/
package org.opendaylight.controller.cluster.raft.messages;
-import akka.dispatch.ControlMessage;
+import org.apache.pekko.dispatch.ControlMessage;
/**
* Local message sent to self on receiving the InstallSnapshotReply from a follower indicating that
*/
package org.opendaylight.controller.cluster.raft.persisted;
-import akka.dispatch.ControlMessage;
import java.io.Serializable;
+import org.apache.pekko.dispatch.ControlMessage;
/**
* This is an internal message that is stored in the akka's persistent journal. During recovery, this
*/
package org.opendaylight.controller.cluster.raft.persisted;
-import akka.dispatch.ControlMessage;
import org.apache.commons.lang3.SerializationUtils;
+import org.apache.pekko.dispatch.ControlMessage;
import org.eclipse.jdt.annotation.NonNull;
import org.opendaylight.controller.cluster.raft.messages.Payload;
import static java.util.Objects.requireNonNull;
-import akka.actor.ExtendedActorSystem;
-import akka.serialization.JSerializer;
-import akka.util.ClassLoaderObjectInputStream;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import org.apache.commons.lang3.SerializationUtils;
+import org.apache.pekko.actor.ExtendedActorSystem;
+import org.apache.pekko.serialization.JSerializer;
+import org.apache.pekko.util.ClassLoaderObjectInputStream;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
package org.opendaylight.controller.cluster.raft;
-import akka.actor.ActorSystem;
-import akka.testkit.javadsl.TestKit;
import java.io.File;
import java.io.IOException;
import org.apache.commons.io.FileUtils;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.opendaylight.yangtools.util.AbstractStringIdentifier;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;
-import akka.actor.ActorRef;
-import akka.actor.InvalidActorNameException;
-import akka.actor.PoisonPill;
-import akka.actor.Terminated;
-import akka.dispatch.Dispatchers;
-import akka.dispatch.Mailboxes;
-import akka.pattern.Patterns;
-import akka.testkit.TestActorRef;
-import akka.testkit.javadsl.TestKit;
-import akka.util.Timeout;
import com.google.common.base.Stopwatch;
import com.google.common.util.concurrent.Uninterruptibles;
import java.io.OutputStream;
import java.util.function.Consumer;
import java.util.function.Predicate;
import org.apache.commons.lang3.SerializationUtils;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.InvalidActorNameException;
+import org.apache.pekko.actor.PoisonPill;
+import org.apache.pekko.actor.Terminated;
+import org.apache.pekko.dispatch.Dispatchers;
+import org.apache.pekko.dispatch.Mailboxes;
+import org.apache.pekko.pattern.Patterns;
+import org.apache.pekko.testkit.TestActorRef;
+import org.apache.pekko.testkit.javadsl.TestKit;
+import org.apache.pekko.util.Timeout;
import org.junit.After;
import org.opendaylight.controller.cluster.raft.MockRaftActor.MockSnapshotState;
import org.opendaylight.controller.cluster.raft.MockRaftActorContext.MockPayload;
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.verify;
-import akka.japi.Procedure;
+import org.apache.pekko.japi.Procedure;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.ArgumentCaptor;
import static org.opendaylight.controller.cluster.raft.utils.MessageCollectorActor.expectMatching;
import static org.opendaylight.controller.cluster.raft.utils.MessageCollectorActor.getAllMatching;
-import akka.actor.ActorRef;
import com.google.common.collect.ImmutableMap;
import com.google.common.collect.Lists;
import java.util.List;
import java.util.concurrent.TimeUnit;
+import org.apache.pekko.actor.ActorRef;
import org.junit.Test;
import org.opendaylight.controller.cluster.notifications.RoleChanged;
import org.opendaylight.controller.cluster.raft.MockRaftActorContext.MockPayload;
import static org.opendaylight.controller.cluster.raft.utils.MessageCollectorActor.expectFirstMatching;
import static org.opendaylight.controller.cluster.raft.utils.MessageCollectorActor.expectMatching;
-import akka.actor.ActorRef;
-import akka.actor.Status;
-import akka.pattern.Patterns;
-import akka.testkit.TestActorRef;
-import akka.testkit.javadsl.TestKit;
import java.util.Collections;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.concurrent.TimeUnit;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.Status;
+import org.apache.pekko.pattern.Patterns;
+import org.apache.pekko.testkit.TestActorRef;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.Test;
import org.opendaylight.controller.cluster.notifications.LeaderStateChanged;
import org.opendaylight.controller.cluster.raft.base.messages.ApplyState;
import static org.junit.Assert.assertEquals;
-import akka.actor.ActorRef;
-import akka.dispatch.Dispatchers;
-import akka.testkit.TestActorRef;
import com.google.common.collect.ImmutableMap;
import com.google.common.io.ByteSource;
import com.google.common.util.concurrent.Uninterruptibles;
import java.util.Optional;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.dispatch.Dispatchers;
+import org.apache.pekko.testkit.TestActorRef;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
-import akka.actor.ActorRef;
-import akka.actor.Props;
import com.google.common.io.ByteSource;
import com.google.common.util.concurrent.Uninterruptibles;
import java.io.IOException;
import java.util.concurrent.TimeUnit;
import java.util.function.Function;
import org.apache.commons.lang3.SerializationUtils;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.Props;
import org.opendaylight.controller.cluster.DataPersistenceProvider;
import org.opendaylight.controller.cluster.raft.behaviors.RaftActorBehavior;
import org.opendaylight.controller.cluster.raft.messages.Payload;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
-import akka.actor.ActorSelection;
-import akka.actor.ActorSystem;
-import akka.actor.Props;
import com.google.common.io.ByteSource;
import com.google.common.util.concurrent.MoreExecutors;
import java.io.IOException;
import java.util.Objects;
import java.util.Optional;
import java.util.function.Consumer;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSelection;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.actor.Props;
import org.opendaylight.controller.cluster.DataPersistenceProvider;
import org.opendaylight.controller.cluster.NonPersistentDataProvider;
import org.opendaylight.controller.cluster.raft.behaviors.RaftActorBehavior;
import static org.junit.Assert.assertEquals;
-import akka.actor.ActorRef;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.Set;
import java.util.concurrent.TimeUnit;
+import org.apache.pekko.actor.ActorRef;
import org.junit.Test;
import org.opendaylight.controller.cluster.notifications.LeaderStateChanged;
import org.opendaylight.controller.cluster.raft.AbstractRaftActorIntegrationTest.TestRaftActor.Builder;
import static org.opendaylight.controller.cluster.raft.utils.MessageCollectorActor.expectFirstMatching;
import static org.opendaylight.controller.cluster.raft.utils.MessageCollectorActor.expectMatching;
-import akka.actor.ActorRef;
import com.google.common.collect.ImmutableMap;
import java.util.List;
import java.util.concurrent.TimeUnit;
+import org.apache.pekko.actor.ActorRef;
import org.junit.Test;
import org.opendaylight.controller.cluster.notifications.RoleChanged;
import org.opendaylight.controller.cluster.raft.MockRaftActorContext.MockPayload;
import static org.mockito.Mockito.reset;
import static org.mockito.Mockito.verify;
-import akka.actor.Props;
-import akka.testkit.TestActorRef;
import com.google.common.util.concurrent.MoreExecutors;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
+import org.apache.pekko.actor.Props;
+import org.apache.pekko.testkit.TestActorRef;
import org.junit.After;
import org.junit.Test;
import org.opendaylight.controller.cluster.DataPersistenceProvider;
import static org.mockito.Mockito.never;
import static org.mockito.Mockito.verify;
-import akka.japi.Procedure;
+import org.apache.pekko.japi.Procedure;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import static org.mockito.Mockito.timeout;
import static org.mockito.Mockito.verify;
-import akka.dispatch.Dispatchers;
import java.util.function.Function;
+import org.apache.pekko.dispatch.Dispatchers;
import org.junit.After;
import org.junit.Test;
import org.opendaylight.controller.cluster.raft.RaftActorLeadershipTransferCohort.OnComplete;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.verifyNoMoreInteractions;
-import akka.actor.ActorRef;
-import akka.actor.ActorSystem;
-import akka.actor.Props;
-import akka.persistence.RecoveryCompleted;
-import akka.persistence.SnapshotMetadata;
-import akka.persistence.SnapshotOffer;
-import akka.testkit.javadsl.TestKit;
import com.google.common.util.concurrent.MoreExecutors;
import java.io.OutputStream;
import java.util.List;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Consumer;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.actor.Props;
+import org.apache.pekko.persistence.RecoveryCompleted;
+import org.apache.pekko.persistence.SnapshotMetadata;
+import org.apache.pekko.persistence.SnapshotOffer;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.After;
import org.junit.Assert;
import org.junit.Before;
import static org.opendaylight.controller.cluster.raft.utils.MessageCollectorActor.expectFirstMatching;
import static org.opendaylight.controller.cluster.raft.utils.MessageCollectorActor.expectMatching;
-import akka.actor.AbstractActor;
-import akka.actor.ActorRef;
-import akka.actor.Props;
-import akka.dispatch.Dispatchers;
-import akka.testkit.TestActorRef;
-import akka.testkit.javadsl.TestKit;
import com.google.common.base.Stopwatch;
import com.google.common.io.ByteSource;
import com.google.common.util.concurrent.MoreExecutors;
import java.util.Set;
import java.util.concurrent.TimeUnit;
import org.apache.commons.lang3.SerializationUtils;
+import org.apache.pekko.actor.AbstractActor;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.Props;
+import org.apache.pekko.dispatch.Dispatchers;
+import org.apache.pekko.testkit.TestActorRef;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
-import akka.actor.ActorRef;
-import akka.persistence.SaveSnapshotFailure;
-import akka.persistence.SaveSnapshotSuccess;
-import akka.persistence.SnapshotMetadata;
import com.google.common.util.concurrent.MoreExecutors;
import java.io.OutputStream;
import java.util.List;
import java.util.Map;
import java.util.Optional;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.persistence.SaveSnapshotFailure;
+import org.apache.pekko.persistence.SaveSnapshotSuccess;
+import org.apache.pekko.persistence.SnapshotMetadata;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;
-import akka.actor.ActorRef;
-import akka.actor.PoisonPill;
-import akka.actor.Status.Failure;
-import akka.actor.Terminated;
-import akka.dispatch.Dispatchers;
-import akka.japi.Procedure;
-import akka.persistence.SaveSnapshotFailure;
-import akka.persistence.SaveSnapshotSuccess;
-import akka.persistence.SnapshotMetadata;
-import akka.persistence.SnapshotOffer;
-import akka.protobuf.ByteString;
-import akka.testkit.TestActorRef;
-import akka.testkit.javadsl.TestKit;
import com.google.common.util.concurrent.Uninterruptibles;
import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.PoisonPill;
+import org.apache.pekko.actor.Status.Failure;
+import org.apache.pekko.actor.Terminated;
+import org.apache.pekko.dispatch.Dispatchers;
+import org.apache.pekko.japi.Procedure;
+import org.apache.pekko.persistence.SaveSnapshotFailure;
+import org.apache.pekko.persistence.SaveSnapshotSuccess;
+import org.apache.pekko.persistence.SnapshotMetadata;
+import org.apache.pekko.persistence.SnapshotOffer;
+import org.apache.pekko.protobuf.ByteString;
+import org.apache.pekko.testkit.TestActorRef;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
reset(mockRaftActor.snapshotCohortDelegate);
raftActorRef.tell(GetSnapshot.INSTANCE, kit.getRef());
- Failure failure = kit.expectMsgClass(akka.actor.Status.Failure.class);
+ Failure failure = kit.expectMsgClass(org.apache.pekko.actor.Status.Failure.class);
assertEquals("Failure cause type", TimeoutException.class, failure.cause().getClass());
mockRaftActor.getSnapshotMessageSupport().setSnapshotReplyActorTimeout(
import static org.junit.Assert.fail;
-import akka.actor.ActorRef;
-import akka.actor.ActorSystem;
-import akka.pattern.Patterns;
-import akka.testkit.javadsl.EventFilter;
-import akka.testkit.javadsl.TestKit;
-import akka.util.Timeout;
import com.google.common.util.concurrent.Uninterruptibles;
import java.util.Optional;
import java.util.concurrent.TimeUnit;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.pattern.Patterns;
+import org.apache.pekko.testkit.javadsl.EventFilter;
+import org.apache.pekko.testkit.javadsl.TestKit;
+import org.apache.pekko.util.Timeout;
import org.opendaylight.controller.cluster.raft.client.messages.FindLeader;
import org.opendaylight.controller.cluster.raft.client.messages.FindLeaderReply;
import org.slf4j.Logger;
import static org.junit.Assert.assertEquals;
-import akka.actor.ActorRef;
-import akka.persistence.SaveSnapshotSuccess;
-import akka.testkit.TestActorRef;
import java.util.List;
import java.util.Map;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.persistence.SaveSnapshotSuccess;
+import org.apache.pekko.testkit.TestActorRef;
import org.junit.Before;
import org.junit.Test;
import org.opendaylight.controller.cluster.raft.persisted.ApplyJournalEntries;
import static org.junit.Assert.assertEquals;
-import akka.actor.ActorRef;
-import akka.persistence.SaveSnapshotSuccess;
import java.util.List;
import java.util.Map;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.persistence.SaveSnapshotSuccess;
import org.junit.Before;
import org.junit.Test;
import org.opendaylight.controller.cluster.raft.MockRaftActorContext.MockPayload;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.verifyNoMoreInteractions;
-import akka.japi.Procedure;
import com.google.common.util.concurrent.MoreExecutors;
import java.util.Map;
import java.util.function.Consumer;
+import org.apache.pekko.japi.Procedure;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import static org.junit.Assert.assertEquals;
-import akka.persistence.SaveSnapshotSuccess;
import java.util.List;
import java.util.Map;
+import org.apache.pekko.persistence.SaveSnapshotSuccess;
import org.junit.Test;
import org.opendaylight.controller.cluster.raft.MockRaftActorContext.MockPayload;
import org.opendaylight.controller.cluster.raft.base.messages.ApplyState;
import static org.junit.Assert.assertNull;
import static org.junit.Assert.assertTrue;
-import akka.actor.ActorRef;
-import akka.persistence.SaveSnapshotSuccess;
import com.google.common.util.concurrent.Uninterruptibles;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.TimeUnit;
import org.apache.commons.lang3.SerializationUtils;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.persistence.SaveSnapshotSuccess;
import org.eclipse.jdt.annotation.Nullable;
import org.junit.Test;
import org.opendaylight.controller.cluster.raft.MockRaftActorContext.MockPayload;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;
-import akka.actor.ActorRef;
-import akka.persistence.SnapshotSelectionCriteria;
import java.io.OutputStream;
import java.util.List;
import java.util.Optional;
import java.util.function.Consumer;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.persistence.SnapshotSelectionCriteria;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertTrue;
-import akka.actor.Actor;
-import akka.actor.ActorIdentity;
-import akka.actor.ActorRef;
-import akka.actor.ActorSelection;
-import akka.actor.ActorSystem;
-import akka.actor.Identify;
-import akka.actor.InvalidActorNameException;
-import akka.actor.PoisonPill;
-import akka.actor.Props;
-import akka.pattern.Patterns;
-import akka.testkit.TestActorRef;
-import akka.testkit.javadsl.TestKit;
-import akka.util.Timeout;
import com.google.common.base.Stopwatch;
import com.google.common.util.concurrent.Uninterruptibles;
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;
+import org.apache.pekko.actor.Actor;
+import org.apache.pekko.actor.ActorIdentity;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSelection;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.actor.Identify;
+import org.apache.pekko.actor.InvalidActorNameException;
+import org.apache.pekko.actor.PoisonPill;
+import org.apache.pekko.actor.Props;
+import org.apache.pekko.pattern.Patterns;
+import org.apache.pekko.testkit.TestActorRef;
+import org.apache.pekko.testkit.javadsl.TestKit;
+import org.apache.pekko.util.Timeout;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import scala.concurrent.Await;
}
public String createTestActorPath(final String actorId) {
- return "akka://test/user/" + actorId;
+ return "pekko://test/user/" + actorId;
}
@Override
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertTrue;
-import akka.actor.ActorRef;
-import akka.actor.ActorSystem;
-import akka.actor.Props;
-import akka.actor.Status;
-import akka.dispatch.ControlMessage;
-import akka.dispatch.Dispatchers;
-import akka.dispatch.Mailboxes;
-import akka.pattern.Patterns;
-import akka.testkit.TestActorRef;
-import akka.testkit.javadsl.TestKit;
-import akka.util.Timeout;
import com.google.common.util.concurrent.Uninterruptibles;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.actor.Props;
+import org.apache.pekko.actor.Status;
+import org.apache.pekko.dispatch.ControlMessage;
+import org.apache.pekko.dispatch.Dispatchers;
+import org.apache.pekko.dispatch.Mailboxes;
+import org.apache.pekko.pattern.Patterns;
+import org.apache.pekko.testkit.TestActorRef;
+import org.apache.pekko.testkit.javadsl.TestKit;
+import org.apache.pekko.util.Timeout;
import org.junit.After;
import org.junit.Before;
import org.opendaylight.controller.cluster.raft.DefaultConfigParamsImpl;
import static org.junit.Assert.assertTrue;
-import akka.actor.ActorRef;
-import akka.testkit.TestActorRef;
import com.google.common.util.concurrent.Uninterruptibles;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.TimeUnit;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.testkit.TestActorRef;
import org.junit.Test;
import org.opendaylight.controller.cluster.raft.DefaultConfigParamsImpl;
import org.opendaylight.controller.cluster.raft.MockRaftActorContext;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertNull;
-import akka.actor.ActorRef;
-import akka.protobuf.ByteString;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.util.Collections;
import java.util.List;
import java.util.Map;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.protobuf.ByteString;
import org.junit.After;
import org.junit.Test;
import org.opendaylight.controller.cluster.raft.AbstractActorTest;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;
-import akka.actor.ActorRef;
-import akka.dispatch.Dispatchers;
-import akka.testkit.TestActorRef;
import com.google.common.base.Stopwatch;
import com.google.common.util.concurrent.MoreExecutors;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.TimeUnit;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.dispatch.Dispatchers;
+import org.apache.pekko.testkit.TestActorRef;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertEquals;
-import akka.actor.ActorRef;
import com.google.common.collect.ImmutableMap;
+import org.apache.pekko.actor.ActorRef;
import org.junit.Test;
import org.opendaylight.controller.cluster.raft.DefaultConfigParamsImpl;
import org.opendaylight.controller.cluster.raft.RaftState;
import static org.mockito.Mockito.spy;
import static org.mockito.Mockito.verify;
-import akka.actor.ActorRef;
-import akka.dispatch.Dispatchers;
-import akka.protobuf.ByteString;
-import akka.testkit.TestActorRef;
-import akka.testkit.javadsl.TestKit;
import com.google.common.base.Stopwatch;
import com.google.common.io.ByteSource;
import com.google.common.util.concurrent.Uninterruptibles;
import java.util.Optional;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.dispatch.Dispatchers;
+import org.apache.pekko.protobuf.ByteString;
+import org.apache.pekko.testkit.TestActorRef;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.After;
import org.junit.Test;
import org.opendaylight.controller.cluster.raft.DefaultConfigParamsImpl;
import static org.junit.Assert.assertEquals;
-import akka.actor.ActorRef;
import java.util.HashMap;
import java.util.Map;
+import org.apache.pekko.actor.ActorRef;
import org.junit.After;
import org.junit.Test;
import org.opendaylight.controller.cluster.raft.DefaultConfigParamsImpl;
@Test
public void testHandleMessageWithThreeMembers() {
- String followerAddress1 = "akka://test/user/$a";
- String followerAddress2 = "akka://test/user/$b";
+ String followerAddress1 = "pekko://test/user/$a";
+ String followerAddress2 = "pekko://test/user/$b";
MockRaftActorContext leaderActorContext = createActorContext();
Map<String, String> peerAddresses = new HashMap<>();
@Test
public void testHandleMessageWithFiveMembers() {
- String followerAddress1 = "akka://test/user/$a";
- String followerAddress2 = "akka://test/user/$b";
- String followerAddress3 = "akka://test/user/$c";
- String followerAddress4 = "akka://test/user/$d";
+ String followerAddress1 = "pekko://test/user/$a";
+ String followerAddress2 = "pekko://test/user/$b";
+ String followerAddress3 = "pekko://test/user/$c";
+ String followerAddress4 = "pekko://test/user/$d";
final MockRaftActorContext leaderActorContext = createActorContext();
Map<String, String> peerAddresses = new HashMap<>();
@Test
public void testHandleMessageFromAnotherLeader() {
- String followerAddress1 = "akka://test/user/$a";
- String followerAddress2 = "akka://test/user/$b";
+ String followerAddress1 = "pekko://test/user/$a";
+ String followerAddress2 = "pekko://test/user/$b";
MockRaftActorContext leaderActorContext = createActorContext();
Map<String, String> peerAddresses = new HashMap<>();
import static org.mockito.Mockito.never;
import static org.mockito.Mockito.verify;
-import akka.actor.ActorRef;
-import akka.actor.PoisonPill;
-import akka.actor.Props;
-import akka.actor.Terminated;
-import akka.protobuf.ByteString;
-import akka.testkit.TestActorRef;
-import akka.testkit.javadsl.TestKit;
import com.google.common.io.ByteSource;
import com.google.common.util.concurrent.Uninterruptibles;
import java.io.IOException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;
import org.apache.commons.lang3.SerializationUtils;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.PoisonPill;
+import org.apache.pekko.actor.Props;
+import org.apache.pekko.actor.Terminated;
+import org.apache.pekko.protobuf.ByteString;
+import org.apache.pekko.testkit.TestActorRef;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.After;
import org.junit.Test;
import org.opendaylight.controller.cluster.messaging.MessageSlice;
import static org.junit.Assert.assertEquals;
-import akka.actor.ActorRef;
import com.google.common.collect.ImmutableMap;
+import org.apache.pekko.actor.ActorRef;
import org.junit.Test;
import org.opendaylight.controller.cluster.raft.DefaultConfigParamsImpl;
import org.opendaylight.controller.cluster.raft.MockRaftActorContext.MockPayload;
import static org.junit.Assert.assertEquals;
-import akka.actor.ActorRef;
import com.google.common.collect.ImmutableMap;
+import org.apache.pekko.actor.ActorRef;
import org.junit.Test;
import org.opendaylight.controller.cluster.raft.DefaultConfigParamsImpl;
import org.opendaylight.controller.cluster.raft.RaftState;
import static org.mockito.Mockito.spy;
import static org.mockito.Mockito.verify;
-import akka.protobuf.ByteString;
import com.google.common.io.ByteSource;
import java.io.IOException;
import java.util.Arrays;
import java.util.HashMap;
import java.util.OptionalInt;
import org.apache.commons.lang3.SerializationUtils;
+import org.apache.pekko.protobuf.ByteString;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import static org.junit.Assert.assertTrue;
import static org.junit.Assert.fail;
-import akka.actor.ActorRef;
+import org.apache.pekko.actor.ActorRef;
import org.junit.After;
import org.junit.Test;
import org.opendaylight.controller.cluster.raft.AbstractActorTest;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;
-import akka.actor.ExtendedActorSystem;
-import akka.testkit.javadsl.TestKit;
import java.io.NotSerializableException;
+import org.apache.pekko.actor.ExtendedActorSystem;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.Test;
import org.opendaylight.controller.cluster.raft.MockRaftActorContext;
*/
package org.opendaylight.controller.cluster.raft.utils;
-import akka.actor.UntypedAbstractActor;
+import org.apache.pekko.actor.UntypedAbstractActor;
public class DoNothingActor extends UntypedAbstractActor {
@Override
*/
package org.opendaylight.controller.cluster.raft.utils;
-import akka.actor.UntypedAbstractActor;
+import org.apache.pekko.actor.UntypedAbstractActor;
/**
* The EchoActor simply responds back with the same message that it receives.
import static org.junit.Assert.assertTrue;
-import akka.actor.Props;
import java.util.ArrayList;
import java.util.List;
+import org.apache.pekko.actor.Props;
import org.opendaylight.controller.cluster.raft.behaviors.RaftActorBehavior;
public class ForwardMessageToBehaviorActor extends MessageCollectorActor {
*/
package org.opendaylight.controller.cluster.raft.utils;
-import akka.dispatch.Futures;
-import akka.persistence.AtomicWrite;
-import akka.persistence.PersistentImpl;
-import akka.persistence.PersistentRepr;
-import akka.persistence.journal.japi.AsyncWriteJournal;
import com.google.common.util.concurrent.Uninterruptibles;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;
import org.apache.commons.lang3.SerializationUtils;
+import org.apache.pekko.dispatch.Futures;
+import org.apache.pekko.persistence.AtomicWrite;
+import org.apache.pekko.persistence.PersistentImpl;
+import org.apache.pekko.persistence.PersistentRepr;
+import org.apache.pekko.persistence.journal.japi.AsyncWriteJournal;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import scala.Option;
public Future<Long> doAsyncReadHighestSequenceNr(final String persistenceId, final long fromSequenceNr) {
LOG.trace("doAsyncReadHighestSequenceNr for {}: fromSequenceNr: {}", persistenceId, fromSequenceNr);
- // Akka calls this during recovery.
+ // Pekko calls this during recovery.
Map<Long, Object> journal = JOURNALS.get(persistenceId);
if (journal == null) {
return Futures.successful(fromSequenceNr);
*/
package org.opendaylight.controller.cluster.raft.utils;
-import akka.dispatch.Futures;
-import akka.persistence.SelectedSnapshot;
-import akka.persistence.SnapshotMetadata;
-import akka.persistence.SnapshotSelectionCriteria;
-import akka.persistence.snapshot.japi.SnapshotStore;
import com.google.common.util.concurrent.Uninterruptibles;
import java.util.ArrayList;
import java.util.Collections;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
+import org.apache.pekko.dispatch.Futures;
+import org.apache.pekko.persistence.SelectedSnapshot;
+import org.apache.pekko.persistence.SnapshotMetadata;
+import org.apache.pekko.persistence.SnapshotSelectionCriteria;
+import org.apache.pekko.persistence.snapshot.japi.SnapshotStore;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import scala.concurrent.Future;
*/
package org.opendaylight.controller.cluster.raft.utils;
-import akka.actor.ActorRef;
-import akka.actor.Props;
-import akka.actor.UntypedAbstractActor;
-import akka.dispatch.ControlMessage;
-import akka.pattern.Patterns;
-import akka.util.Timeout;
import com.google.common.base.Predicate;
import com.google.common.base.Predicates;
import com.google.common.base.Throwables;
import java.util.List;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.Props;
+import org.apache.pekko.actor.UntypedAbstractActor;
+import org.apache.pekko.dispatch.ControlMessage;
+import org.apache.pekko.pattern.Patterns;
+import org.apache.pekko.util.Timeout;
import org.junit.Assert;
import scala.concurrent.Await;
import scala.concurrent.Future;
-akka {
+pekko {
persistence.snapshot-store.plugin = "mock-snapshot-store"
persistence.journal.plugin = "mock-journal"
loglevel = "DEBUG"
- loggers = ["akka.testkit.TestEventListener", "akka.event.slf4j.Slf4jLogger"]
+ loggers = ["org.apache.pekko.testkit.TestEventListener", "org.apache.pekko.event.slf4j.Slf4jLogger"]
actor {
- provider = "akka.cluster.ClusterActorRefProvider"
+ provider = "org.apache.pekko.cluster.ClusterActorRefProvider"
# enable to test serialization only.
serialize-messages = off
# Class name of the plugin.
class = "org.opendaylight.controller.cluster.raft.utils.InMemorySnapshotStore"
# Dispatcher for the plugin actor.
- plugin-dispatcher = "akka.persistence.dispatchers.default-plugin-dispatcher"
+ plugin-dispatcher = "pekko.persistence.dispatchers.default-plugin-dispatcher"
}
mock-journal {
# Class name of the plugin.
class = "org.opendaylight.controller.cluster.raft.utils.InMemoryJournal"
# Dispatcher for the plugin actor.
- plugin-dispatcher = "akka.persistence.dispatchers.default-plugin-dispatcher"
+ plugin-dispatcher = "pekko.persistence.dispatchers.default-plugin-dispatcher"
}
</dependency>
<dependency>
<groupId>org.opendaylight.controller</groupId>
- <artifactId>repackaged-akka</artifactId>
+ <artifactId>repackaged-pekko</artifactId>
</dependency>
<dependency>
<groupId>org.opendaylight.controller</groupId>
</dependency>
<dependency>
- <groupId>com.typesafe.akka</groupId>
- <artifactId>akka-testkit_2.13</artifactId>
+ <groupId>org.apache.pekko</groupId>
+ <artifactId>pekko-testkit_2.13</artifactId>
</dependency>
<dependency>
- <groupId>com.typesafe.akka</groupId>
- <artifactId>akka-persistence-tck_2.13</artifactId>
+ <groupId>org.apache.pekko</groupId>
+ <artifactId>pekko-persistence-tck_2.13</artifactId>
</dependency>
<dependency>
<groupId>commons-io</groupId>
import static java.util.Objects.requireNonNull;
-import akka.persistence.PersistentRepr;
+import org.apache.pekko.persistence.PersistentRepr;
/**
* A single entry in the data journal. We do not store {@code persistenceId} for each entry, as that is a
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorSystem;
-import akka.actor.ExtendedActorSystem;
-import akka.persistence.PersistentRepr;
-import akka.serialization.JavaSerializer;
import com.google.common.base.VerifyException;
import io.atomix.storage.journal.JournalSerdes.EntryInput;
import io.atomix.storage.journal.JournalSerdes.EntryOutput;
import io.atomix.storage.journal.JournalSerdes.EntrySerdes;
import java.io.IOException;
import java.util.concurrent.Callable;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.actor.ExtendedActorSystem;
+import org.apache.pekko.persistence.PersistentRepr;
+import org.apache.pekko.serialization.JavaSerializer;
import org.opendaylight.controller.akka.segjournal.DataJournalEntry.FromPersistence;
import org.opendaylight.controller.akka.segjournal.DataJournalEntry.ToPersistence;
*/
package org.opendaylight.controller.akka.segjournal;
-import akka.actor.ActorSystem;
-import akka.persistence.PersistentRepr;
import com.codahale.metrics.Histogram;
import com.google.common.base.VerifyException;
import io.atomix.storage.journal.JournalReader;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.persistence.PersistentRepr;
import org.opendaylight.controller.akka.segjournal.DataJournalEntry.FromPersistence;
import org.opendaylight.controller.akka.segjournal.DataJournalEntry.ToPersistence;
import org.opendaylight.controller.akka.segjournal.SegmentedJournalActor.ReplayMessages;
*/
package org.opendaylight.controller.akka.segjournal;
-import static akka.actor.ActorRef.noSender;
import static com.google.common.base.Preconditions.checkArgument;
import static com.google.common.base.Preconditions.checkState;
+import static org.apache.pekko.actor.ActorRef.noSender;
-import akka.actor.ActorRef;
-import akka.dispatch.Futures;
-import akka.persistence.AtomicWrite;
-import akka.persistence.PersistentRepr;
-import akka.persistence.journal.japi.AsyncWriteJournal;
import com.typesafe.config.Config;
import io.atomix.storage.journal.SegmentedJournal;
import io.atomix.storage.journal.StorageLevel;
import java.util.Map;
import java.util.Optional;
import java.util.function.Consumer;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.dispatch.Futures;
+import org.apache.pekko.persistence.AtomicWrite;
+import org.apache.pekko.persistence.PersistentRepr;
+import org.apache.pekko.persistence.journal.japi.AsyncWriteJournal;
import org.opendaylight.controller.akka.segjournal.SegmentedJournalActor.AsyncMessage;
import org.opendaylight.controller.akka.segjournal.SegmentedJournalActor.WriteMessages;
import org.slf4j.Logger;
import scala.concurrent.Future;
/**
- * An Akka persistence journal implementation on top of {@link SegmentedJournal}. This actor represents aggregation
- * of multiple journals and performs a receptionist job between Akka and invidual per-persistenceId actors. See
+ * A Pekko persistence journal implementation on top of {@link SegmentedJournal}. This actor represents aggregation
+ * of multiple journals and performs a receptionist job between Pekko and individual per-persistenceId actors. See
* {@link SegmentedJournalActor} for details on how the persistence works.
*/
public class SegmentedFileJournal extends AsyncWriteJournal {
import static com.google.common.base.Verify.verifyNotNull;
import static java.util.Objects.requireNonNull;
-import akka.actor.AbstractActor;
-import akka.actor.ActorRef;
-import akka.actor.Props;
-import akka.japi.pf.ReceiveBuilder;
-import akka.persistence.AtomicWrite;
-import akka.persistence.PersistentRepr;
import com.codahale.metrics.Histogram;
import com.codahale.metrics.Meter;
import com.codahale.metrics.MetricRegistry;
import java.util.Optional;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;
+import org.apache.pekko.actor.AbstractActor;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.Props;
+import org.apache.pekko.japi.pf.ReceiveBuilder;
+import org.apache.pekko.persistence.AtomicWrite;
+import org.apache.pekko.persistence.PersistentRepr;
import org.opendaylight.controller.cluster.common.actor.MeteringBehavior;
import org.opendaylight.controller.cluster.reporting.MetricsReporter;
import org.opendaylight.controller.raft.journal.FromByteBufMapper;
* </ul>
*
* <p>This is a conscious design decision to minimize the amount of data that is being stored in the data journal while
- * speeding up normal operations. Since the SegmentedJournal is an append-only linear log and Akka requires the ability
+ * speeding up normal operations. Since the SegmentedJournal is an append-only linear log and Pekko requires the ability
* to delete persistence entries, we need ability to mark a subset of a SegmentedJournal as deleted. While we could
* treat such delete requests as normal events, this leads to a mismatch between SegmentedJournal indices (as exposed by
- * {@link Indexed}) and Akka sequence numbers -- requiring us to potentially perform costly deserialization to find the
+ * {@link Indexed}) and Pekko sequence numbers -- requiring us to potentially perform costly deserialization to find the
* index corresponding to a particular sequence number, or maintain moderately-complex logic and data structures to
* perform that mapping in sub-linear time complexity.
*
*/
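The index-alignment argument above can be sketched as follows. This is illustrative only and not part of the patch; `AlignedIndex` and its methods are hypothetical names. When deletions are tracked as a single watermark instead of ordinary journal entries, a Pekko sequence number maps to a journal index by identity, with no deserialization or search:

```java
// Hypothetical sketch: why keeping SegmentedJournal indices aligned with
// Pekko sequence numbers matters. Deletions are recorded as one watermark,
// so the sequence-number -> index mapping stays an identity function.
final class AlignedIndex {
    private long deletedTo; // highest sequence number marked as deleted

    void markDeletedTo(final long sequenceNr) {
        deletedTo = Math.max(deletedTo, sequenceNr);
    }

    // With 1:1 alignment, no deserialization or sub-linear lookup structure
    // is needed: the journal index *is* the sequence number.
    long indexFor(final long sequenceNr) {
        if (sequenceNr <= deletedTo) {
            throw new IllegalArgumentException("Entry " + sequenceNr + " has been deleted");
        }
        return sequenceNr;
    }
}
```

Had delete requests been stored as regular events, the two numbering schemes would drift apart and every lookup would need the costly mapping the Javadoc describes.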
package org.opendaylight.controller.akka.segjournal;
-import akka.persistence.japi.journal.JavaJournalSpec;
import com.typesafe.config.ConfigFactory;
import java.io.File;
import org.apache.commons.io.FileUtils;
+import org.apache.pekko.persistence.japi.journal.JavaJournalSpec;
import org.junit.runner.RunWith;
import org.scalatestplus.junit.JUnitRunner;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;
-import akka.actor.ActorRef;
-import akka.actor.ActorSystem;
-import akka.actor.PoisonPill;
-import akka.persistence.AtomicWrite;
-import akka.persistence.PersistentRepr;
-import akka.testkit.CallingThreadDispatcher;
-import akka.testkit.javadsl.TestKit;
import io.atomix.storage.journal.StorageLevel;
import java.io.File;
import java.io.IOException;
import java.util.function.Consumer;
import java.util.stream.Collectors;
import org.apache.commons.io.FileUtils;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.actor.PoisonPill;
+import org.apache.pekko.persistence.AtomicWrite;
+import org.apache.pekko.persistence.PersistentRepr;
+import org.apache.pekko.testkit.CallingThreadDispatcher;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeAll;
-akka {
+pekko {
persistence {
journal {
- plugin = "akka.persistence.journal.segmented-file"
+ plugin = "pekko.persistence.journal.segmented-file"
segmented-file {
class = "org.opendaylight.controller.akka.segjournal.SegmentedFileJournal"
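The plugin wiring above follows the standard indirection: the `plugin` key names a config section, and that section's `class` key names the journal implementation. A small sketch with Typesafe Config (already used by this code base) shows the resolution; the inline HOCON mirrors the fragment above rather than loading the real file:

```java
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;

public final class JournalConfigCheck {
    static String resolvePluginClass(final String hocon) {
        final Config config = ConfigFactory.parseString(hocon);
        // Follow the indirection: plugin key -> plugin section -> class key
        final String plugin = config.getString("pekko.persistence.journal.plugin");
        return config.getString(plugin + ".class");
    }

    public static void main(final String[] args) {
        System.out.println(resolvePluginClass(
            "pekko.persistence.journal.plugin = \"pekko.persistence.journal.segmented-file\"\n"
            + "pekko.persistence.journal.segmented-file.class = "
            + "\"org.opendaylight.controller.akka.segjournal.SegmentedFileJournal\""));
    }
}
```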
</dependency>
<dependency>
<groupId>org.opendaylight.controller</groupId>
- <artifactId>repackaged-akka</artifactId>
+ <artifactId>repackaged-pekko</artifactId>
</dependency>
<dependency>
<groupId>org.opendaylight.controller</groupId>
<!-- Tests -->
<dependency>
- <groupId>com.typesafe.akka</groupId>
- <artifactId>akka-testkit_2.13</artifactId>
+ <groupId>org.apache.pekko</groupId>
+ <artifactId>pekko-testkit_2.13</artifactId>
</dependency>
<dependency>
<groupId>org.opendaylight.controller</groupId>
*/
package org.opendaylight.controller.cluster.datastore.admin;
-import akka.actor.ActorRef;
-import akka.actor.ActorSelection;
-import akka.actor.Status.Success;
-import akka.dispatch.OnComplete;
-import akka.pattern.Patterns;
-import akka.util.Timeout;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.base.Strings;
import com.google.common.base.Throwables;
import java.util.function.Function;
import java.util.stream.Collectors;
import org.apache.commons.lang3.SerializationUtils;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSelection;
+import org.apache.pekko.actor.Status.Success;
+import org.apache.pekko.dispatch.OnComplete;
+import org.apache.pekko.pattern.Patterns;
+import org.apache.pekko.util.Timeout;
import org.eclipse.jdt.annotation.NonNull;
import org.opendaylight.controller.cluster.access.concepts.MemberName;
import org.opendaylight.controller.cluster.datastore.DistributedDataStoreInterface;
final Future<ActorRef> localShardReply = actorUtils.findLocalShardAsync(shardName);
- final scala.concurrent.Promise<Object> makeLeaderLocalAsk = akka.dispatch.Futures.promise();
+ final scala.concurrent.Promise<Object> makeLeaderLocalAsk = org.apache.pekko.dispatch.Futures.promise();
localShardReply.onComplete(new OnComplete<ActorRef>() {
@Override
public void onComplete(final Throwable failure, final ActorRef actorRef) {
import static org.opendaylight.controller.cluster.datastore.MemberNode.verifyRaftPeersPresent;
import static org.opendaylight.controller.cluster.datastore.MemberNode.verifyRaftState;
-import akka.actor.ActorRef;
-import akka.actor.PoisonPill;
-import akka.actor.Status.Success;
-import akka.cluster.Cluster;
import com.google.common.collect.Lists;
import java.io.File;
import java.nio.file.Files;
import java.util.Set;
import java.util.concurrent.TimeUnit;
import org.apache.commons.lang3.SerializationUtils;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.PoisonPill;
+import org.apache.pekko.actor.Status.Success;
+import org.apache.pekko.cluster.Cluster;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
</dependency>
<dependency>
<groupId>org.opendaylight.controller</groupId>
- <artifactId>repackaged-akka</artifactId>
+ <artifactId>repackaged-pekko</artifactId>
</dependency>
<dependency>
<groupId>org.osgi</groupId>
<artifactId>guava-testlib</artifactId>
</dependency>
<dependency>
- <groupId>com.typesafe.akka</groupId>
- <artifactId>akka-persistence-tck_2.13</artifactId>
+ <groupId>org.apache.pekko</groupId>
+ <artifactId>pekko-persistence-tck_2.13</artifactId>
</dependency>
<dependency>
- <groupId>com.typesafe.akka</groupId>
- <artifactId>akka-testkit_2.13</artifactId>
+ <groupId>org.apache.pekko</groupId>
+ <artifactId>pekko-testkit_2.13</artifactId>
</dependency>
<dependency>
<groupId>commons-io</groupId>
*/
package org.opendaylight.controller.cluster;
-import akka.actor.ActorSystem;
+import org.apache.pekko.actor.ActorSystem;
import org.eclipse.jdt.annotation.NonNull;
import org.opendaylight.yangtools.concepts.ObjectRegistration;
*/
package org.opendaylight.controller.cluster;
-import akka.actor.ActorSystem;
+import org.apache.pekko.actor.ActorSystem;
/**
* Listener interface for notification of ActorSystem changes from an ActorSystemProvider.
package org.opendaylight.controller.cluster;
-import akka.japi.Procedure;
-import akka.persistence.JournalProtocol;
-import akka.persistence.SnapshotProtocol;
-import akka.persistence.SnapshotSelectionCriteria;
+import org.apache.pekko.japi.Procedure;
+import org.apache.pekko.persistence.JournalProtocol;
+import org.apache.pekko.persistence.SnapshotProtocol;
+import org.apache.pekko.persistence.SnapshotSelectionCriteria;
import org.eclipse.jdt.annotation.NonNull;
/**
*/
package org.opendaylight.controller.cluster;
-import akka.japi.Procedure;
-import akka.persistence.JournalProtocol;
-import akka.persistence.SnapshotProtocol;
-import akka.persistence.SnapshotSelectionCriteria;
+import org.apache.pekko.japi.Procedure;
+import org.apache.pekko.persistence.JournalProtocol;
+import org.apache.pekko.persistence.SnapshotProtocol;
+import org.apache.pekko.persistence.SnapshotSelectionCriteria;
/**
* A DataPersistenceProvider implementation that delegates to another implementation.
import static java.util.Objects.requireNonNull;
-import akka.japi.Procedure;
-import akka.persistence.JournalProtocol;
-import akka.persistence.SnapshotProtocol;
-import akka.persistence.SnapshotSelectionCriteria;
+import org.apache.pekko.japi.Procedure;
+import org.apache.pekko.persistence.JournalProtocol;
+import org.apache.pekko.persistence.SnapshotProtocol;
+import org.apache.pekko.persistence.SnapshotSelectionCriteria;
import org.opendaylight.controller.cluster.common.actor.ExecuteInSelfActor;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import static java.util.Objects.requireNonNull;
-import akka.japi.Procedure;
-import akka.persistence.AbstractPersistentActor;
-import akka.persistence.DeleteMessagesSuccess;
-import akka.persistence.DeleteSnapshotsSuccess;
-import akka.persistence.JournalProtocol;
-import akka.persistence.SnapshotProtocol;
-import akka.persistence.SnapshotSelectionCriteria;
+import org.apache.pekko.japi.Procedure;
+import org.apache.pekko.persistence.AbstractPersistentActor;
+import org.apache.pekko.persistence.DeleteMessagesSuccess;
+import org.apache.pekko.persistence.DeleteSnapshotsSuccess;
+import org.apache.pekko.persistence.JournalProtocol;
+import org.apache.pekko.persistence.SnapshotProtocol;
+import org.apache.pekko.persistence.SnapshotSelectionCriteria;
/**
* A DataPersistenceProvider implementation with persistence enabled.
package org.opendaylight.controller.cluster.common.actor;
-import akka.actor.AbstractActor;
-import akka.actor.ActorRef;
import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
+import org.apache.pekko.actor.AbstractActor;
+import org.apache.pekko.actor.ActorRef;
import org.eclipse.jdt.annotation.NonNull;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
*/
package org.opendaylight.controller.cluster.common.actor;
-import akka.actor.ActorRef;
-import akka.persistence.AbstractPersistentActor;
import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.persistence.AbstractPersistentActor;
import org.eclipse.jdt.annotation.NonNull;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import scala.concurrent.ExecutionContext;
public class Dispatchers {
- public static final String DEFAULT_DISPATCHER_PATH = "akka.actor.default-dispatcher";
+ public static final String DEFAULT_DISPATCHER_PATH = "pekko.actor.default-dispatcher";
public static final String CLIENT_DISPATCHER_PATH = "client-dispatcher";
public static final String TXN_DISPATCHER_PATH = "txn-dispatcher";
public static final String SHARD_DISPATCHER_PATH = "shard-dispatcher";
public static final String NOTIFICATION_DISPATCHER_PATH = "notification-dispatcher";
public static final String SERIALIZATION_DISPATCHER_PATH = "serialization-dispatcher";
- private final akka.dispatch.Dispatchers dispatchers;
+ private final org.apache.pekko.dispatch.Dispatchers dispatchers;
public enum DispatcherType {
Client(CLIENT_DISPATCHER_PATH),
this.path = path;
}
- String path(final akka.dispatch.Dispatchers knownDispatchers) {
+ String path(final org.apache.pekko.dispatch.Dispatchers knownDispatchers) {
if (knownDispatchers.hasDispatcher(path)) {
return path;
}
return DEFAULT_DISPATCHER_PATH;
}
- ExecutionContext dispatcher(final akka.dispatch.Dispatchers knownDispatchers) {
+ ExecutionContext dispatcher(final org.apache.pekko.dispatch.Dispatchers knownDispatchers) {
if (knownDispatchers.hasDispatcher(path)) {
return knownDispatchers.lookup(path);
}
}
}
- public Dispatchers(final akka.dispatch.Dispatchers dispatchers) {
+ public Dispatchers(final org.apache.pekko.dispatch.Dispatchers dispatchers) {
this.dispatchers = requireNonNull(dispatchers, "dispatchers should not be null");
}
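The fallback rule encoded by `DispatcherType.path()` above can be exercised standalone. This sketch substitutes a plain `Set` for `org.apache.pekko.dispatch.Dispatchers`, since the real lookup needs a live actor system; the class and method names here are hypothetical:

```java
import java.util.Set;

// Standalone sketch of the lookup-with-fallback pattern: use the configured
// dispatcher path when the actor system knows it, otherwise fall back to the
// default dispatcher path.
final class DispatcherPathFallback {
    static final String DEFAULT = "pekko.actor.default-dispatcher";

    static String resolve(final Set<String> knownDispatchers, final String path) {
        return knownDispatchers.contains(path) ? path : DEFAULT;
    }
}
```

This mirrors why each `DispatcherType` can be used safely even when a deployment omits the corresponding dispatcher configuration.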
*/
package org.opendaylight.controller.cluster.common.actor;
-import akka.japi.Procedure;
import com.google.common.annotations.Beta;
+import org.apache.pekko.japi.Procedure;
import org.eclipse.jdt.annotation.NonNull;
/**
import static java.util.Objects.requireNonNull;
-import akka.dispatch.ControlMessage;
+import org.apache.pekko.dispatch.ControlMessage;
import org.eclipse.jdt.annotation.NonNull;
/**
*/
package org.opendaylight.controller.cluster.common.actor;
-import akka.actor.ActorRef;
-import akka.actor.ActorSelection;
-import akka.pattern.ExplicitAskSupport;
-import akka.util.Timeout;
import com.google.common.annotations.Beta;
import java.util.function.Function;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSelection;
+import org.apache.pekko.pattern.ExplicitAskSupport;
+import org.apache.pekko.util.Timeout;
import scala.Function1;
import scala.concurrent.Future;
import scala.runtime.AbstractFunction1;
*/
@Beta
public final class ExplicitAsk {
- private static final ExplicitAskSupport ASK_SUPPORT = akka.pattern.extended.package$.MODULE$;
+ private static final ExplicitAskSupport ASK_SUPPORT = org.apache.pekko.pattern.extended.package$.MODULE$;
private ExplicitAsk() {
throw new UnsupportedOperationException();
@Singleton
public class FileAkkaConfigurationReader implements AkkaConfigurationReader {
private static final Logger LOG = LoggerFactory.getLogger(FileAkkaConfigurationReader.class);
- private static final String CUSTOM_AKKA_CONF_PATH = "./configuration/initial/akka.conf";
- private static final String FACTORY_AKKA_CONF_PATH = "./configuration/factory/akka.conf";
+ private static final String CUSTOM_AKKA_CONF_PATH = "./configuration/initial/pekko.conf";
+ private static final String FACTORY_AKKA_CONF_PATH = "./configuration/factory/pekko.conf";
@Override
public Config read() {
@Activate
void activate() {
- LOG.info("File-based Akka configuration reader enabled");
+ LOG.info("File-based Pekko configuration reader enabled");
}
@Deactivate
void deactivate() {
- LOG.info("File-based Akka configuration reader disabled");
+ LOG.info("File-based Pekko configuration reader disabled");
}
}
package org.opendaylight.controller.cluster.common.actor;
-import akka.actor.ActorRef;
-import akka.actor.ActorSystem;
-import akka.dispatch.BoundedDequeBasedMailbox;
-import akka.dispatch.MailboxType;
-import akka.dispatch.ProducesMessageQueue;
import com.codahale.metrics.Gauge;
import com.codahale.metrics.Metric;
import com.codahale.metrics.MetricRegistry;
import com.typesafe.config.Config;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.dispatch.BoundedDequeBasedMailbox;
+import org.apache.pekko.dispatch.MailboxType;
+import org.apache.pekko.dispatch.ProducesMessageQueue;
import org.opendaylight.controller.cluster.reporting.MetricsReporter;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
*/
package org.opendaylight.controller.cluster.common.actor;
-import akka.actor.AbstractActor;
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.Timer;
+import org.apache.pekko.actor.AbstractActor;
import org.opendaylight.controller.cluster.reporting.MetricsReporter;
import scala.PartialFunction;
import scala.runtime.AbstractPartialFunction;
package org.opendaylight.controller.cluster.common.actor;
-import akka.actor.ActorRef;
import java.io.Serializable;
+import org.apache.pekko.actor.ActorRef;
public class Monitor implements Serializable {
private static final long serialVersionUID = 1L;
package org.opendaylight.controller.cluster.common.actor;
-import akka.actor.Address;
-import akka.actor.Props;
-import akka.actor.UntypedAbstractActor;
-import akka.cluster.Cluster;
-import akka.cluster.ClusterEvent;
-import akka.japi.Effect;
-import akka.remote.AssociationErrorEvent;
-import akka.remote.RemotingLifecycleEvent;
-import akka.remote.artery.ThisActorSystemQuarantinedEvent;
import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
import java.util.HashSet;
import java.util.Set;
+import org.apache.pekko.actor.Address;
+import org.apache.pekko.actor.Props;
+import org.apache.pekko.actor.UntypedAbstractActor;
+import org.apache.pekko.cluster.Cluster;
+import org.apache.pekko.cluster.ClusterEvent;
+import org.apache.pekko.japi.Effect;
+import org.apache.pekko.remote.AssociationErrorEvent;
+import org.apache.pekko.remote.RemotingLifecycleEvent;
+import org.apache.pekko.remote.artery.ThisActorSystemQuarantinedEvent;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
*/
package org.opendaylight.controller.cluster.common.actor;
-import akka.actor.ActorRef;
-import akka.actor.ActorSystem;
-import akka.dispatch.ControlMessage;
-import akka.dispatch.DequeBasedMessageQueueSemantics;
-import akka.dispatch.Envelope;
-import akka.dispatch.MailboxType;
-import akka.dispatch.ProducesMessageQueue;
-import akka.dispatch.UnboundedControlAwareMailbox;
import com.codahale.metrics.Gauge;
import com.typesafe.config.Config;
import java.util.Deque;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedDeque;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.dispatch.ControlMessage;
+import org.apache.pekko.dispatch.DequeBasedMessageQueueSemantics;
+import org.apache.pekko.dispatch.Envelope;
+import org.apache.pekko.dispatch.MailboxType;
+import org.apache.pekko.dispatch.ProducesMessageQueue;
+import org.apache.pekko.dispatch.UnboundedControlAwareMailbox;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import scala.Option;
import static com.google.common.base.Preconditions.checkArgument;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.function.BiConsumer;
+import org.apache.pekko.actor.ActorRef;
import org.eclipse.jdt.annotation.NonNull;
import org.opendaylight.controller.cluster.io.FileBackedOutputStreamFactory;
import org.opendaylight.yangtools.concepts.Identifier;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
-import akka.serialization.JavaSerializer;
-import akka.serialization.Serialization;
import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;
import java.io.Serializable;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.serialization.JavaSerializer;
+import org.apache.pekko.serialization.Serialization;
import org.opendaylight.yangtools.concepts.Identifier;
/**
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
-import akka.serialization.JavaSerializer;
-import akka.serialization.Serialization;
import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;
import java.io.Serializable;
import java.util.Optional;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.serialization.JavaSerializer;
+import org.apache.pekko.serialization.Serialization;
import org.opendaylight.yangtools.concepts.Identifier;
/**
import static com.google.common.base.Preconditions.checkArgument;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Predicate;
+import org.apache.pekko.actor.ActorRef;
import org.eclipse.jdt.annotation.NonNull;
import org.opendaylight.controller.cluster.io.FileBackedOutputStream;
import org.opendaylight.controller.cluster.io.FileBackedOutputStreamFactory;
import static com.google.common.base.Preconditions.checkState;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
-import akka.actor.ActorSelection;
import java.io.Serializable;
import java.util.function.Consumer;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSelection;
import org.opendaylight.controller.cluster.io.FileBackedOutputStream;
import org.opendaylight.controller.cluster.io.FileBackedOutputStreamFactory;
import org.opendaylight.yangtools.concepts.Identifier;
*/
package org.opendaylight.controller.cluster.notifications;
-import akka.actor.ActorPath;
-import akka.actor.ActorRef;
-import akka.actor.Props;
-import akka.serialization.Serialization;
import java.util.HashMap;
import java.util.Map;
+import org.apache.pekko.actor.ActorPath;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.Props;
+import org.apache.pekko.serialization.Serialization;
import org.opendaylight.controller.cluster.common.actor.AbstractUntypedActor;
/**
import static com.google.common.base.Preconditions.checkArgument;
-import akka.actor.ExtendedActorSystem;
-import akka.dispatch.Futures;
-import akka.persistence.SelectedSnapshot;
-import akka.persistence.SnapshotMetadata;
-import akka.persistence.SnapshotSelectionCriteria;
-import akka.persistence.serialization.Snapshot;
-import akka.persistence.serialization.SnapshotSerializer;
-import akka.persistence.snapshot.japi.SnapshotStore;
-import akka.serialization.JavaSerializer;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.io.ByteStreams;
import com.typesafe.config.Config;
import java.util.stream.Collector;
import java.util.stream.Collectors;
import java.util.stream.Stream;
+import org.apache.pekko.actor.ExtendedActorSystem;
+import org.apache.pekko.dispatch.Futures;
+import org.apache.pekko.persistence.SelectedSnapshot;
+import org.apache.pekko.persistence.SnapshotMetadata;
+import org.apache.pekko.persistence.SnapshotSelectionCriteria;
+import org.apache.pekko.persistence.serialization.Snapshot;
+import org.apache.pekko.persistence.serialization.SnapshotSerializer;
+import org.apache.pekko.persistence.snapshot.japi.SnapshotStore;
+import org.apache.pekko.serialization.JavaSerializer;
import org.eclipse.jdt.annotation.Nullable;
import org.opendaylight.controller.cluster.io.InputOutputStreamFactory;
import org.slf4j.Logger;
*/
package org.opendaylight.controller.cluster.schema.provider.impl;
-import akka.dispatch.OnComplete;
import com.google.common.annotations.Beta;
import com.google.common.util.concurrent.ListenableFuture;
import com.google.common.util.concurrent.SettableFuture;
+import org.apache.pekko.dispatch.OnComplete;
import org.opendaylight.controller.cluster.schema.provider.RemoteYangTextSourceProvider;
import org.opendaylight.yangtools.yang.model.api.source.SourceIdentifier;
import org.opendaylight.yangtools.yang.model.api.source.YangTextSource;
@Override
public Future<Set<SourceIdentifier>> getProvidedSources() {
- return akka.dispatch.Futures.successful(providedSources);
+ return org.apache.pekko.dispatch.Futures.successful(providedSources);
}
@Override
public Future<YangTextSchemaSourceSerializationProxy> getYangTextSchemaSource(final SourceIdentifier identifier) {
LOG.trace("Sending yang schema source for {}", identifier);
- final Promise<YangTextSchemaSourceSerializationProxy> promise = akka.dispatch.Futures.promise();
+ final Promise<YangTextSchemaSourceSerializationProxy> promise = org.apache.pekko.dispatch.Futures.promise();
ListenableFuture<YangTextSource> future =
repository.getSchemaSource(identifier, YangTextSource.class);
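The method above uses the promise-completion pattern: create a promise, register a callback on an asynchronous result, and complete the promise from the callback. A stand-in sketch using `java.util.concurrent.CompletableFuture` (rather than Guava's `ListenableFuture` and Scala's `Promise`, which need the full dependency set) shows the same shape; `PromiseBridge` is a hypothetical name:

```java
import java.util.concurrent.CompletableFuture;

// Sketch of the Futures.promise() + OnComplete pattern above, expressed with
// JDK types: complete a separate promise from an asynchronous callback,
// propagating either the value or the failure.
final class PromiseBridge {
    static CompletableFuture<String> bridge(final CompletableFuture<String> source) {
        final CompletableFuture<String> promise = new CompletableFuture<>();
        source.whenComplete((value, failure) -> {
            if (failure != null) {
                promise.completeExceptionally(failure);
            } else {
                promise.complete(value);
            }
        });
        return promise;
    }
}
```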
*/
package org.opendaylight.controller.cluster.common.actor;
-import akka.actor.ActorRef;
-import akka.actor.ActorSystem;
-import akka.actor.DeadLetter;
-import akka.actor.Props;
-import akka.actor.UntypedAbstractActor;
-import akka.testkit.TestKit;
import com.typesafe.config.ConfigFactory;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.actor.DeadLetter;
+import org.apache.pekko.actor.Props;
+import org.apache.pekko.actor.UntypedAbstractActor;
+import org.apache.pekko.testkit.TestKit;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;
import static org.mockito.Mockito.timeout;
import static org.mockito.Mockito.verify;
-import akka.actor.ActorRef;
-import akka.actor.ActorSystem;
-import akka.actor.Address;
-import akka.event.Logging;
-import akka.japi.Effect;
-import akka.remote.AssociationErrorEvent;
-import akka.remote.InvalidAssociation;
-import akka.remote.UniqueAddress;
-import akka.remote.artery.ThisActorSystemQuarantinedEvent;
-import akka.testkit.javadsl.TestKit;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.actor.Address;
+import org.apache.pekko.event.Logging;
+import org.apache.pekko.japi.Effect;
+import org.apache.pekko.remote.AssociationErrorEvent;
+import org.apache.pekko.remote.InvalidAssociation;
+import org.apache.pekko.remote.UniqueAddress;
+import org.apache.pekko.remote.artery.ThisActorSystemQuarantinedEvent;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.mockito.Mockito.doNothing;
import static org.mockito.Mockito.doReturn;
-import akka.actor.ActorSystem;
-import akka.testkit.TestProbe;
-import akka.testkit.javadsl.TestKit;
import com.google.common.io.ByteSource;
import java.io.IOException;
import java.io.InputStream;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.testkit.TestProbe;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;
import static org.opendaylight.controller.cluster.messaging.MessageSlicingIntegrationTest.assertFailedMessageSliceReply;
import static org.opendaylight.controller.cluster.messaging.MessageSlicingIntegrationTest.assertSuccessfulMessageSliceReply;
-import akka.actor.ActorRef;
import com.google.common.util.concurrent.Uninterruptibles;
import java.io.IOException;
import java.util.concurrent.TimeUnit;
import java.util.function.BiConsumer;
import org.apache.commons.lang3.SerializationUtils;
+import org.apache.pekko.actor.ActorRef;
import org.junit.Before;
import org.junit.Test;
import org.mockito.Mock;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
-import akka.actor.ActorSystem;
-import akka.actor.ExtendedActorSystem;
-import akka.serialization.JavaSerializer;
-import akka.testkit.TestProbe;
-import akka.testkit.javadsl.TestKit;
import org.apache.commons.lang3.SerializationUtils;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.actor.ExtendedActorSystem;
+import org.apache.pekko.serialization.JavaSerializer;
+import org.apache.pekko.testkit.TestProbe;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertArrayEquals;
import static org.junit.Assert.assertEquals;
-import akka.actor.ActorSystem;
-import akka.actor.ExtendedActorSystem;
-import akka.serialization.JavaSerializer;
-import akka.testkit.TestProbe;
-import akka.testkit.javadsl.TestKit;
import org.apache.commons.lang3.SerializationUtils;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.actor.ExtendedActorSystem;
+import org.apache.pekko.serialization.JavaSerializer;
+import org.apache.pekko.testkit.TestProbe;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.verifyNoMoreInteractions;
-import akka.actor.ActorRef;
import com.google.common.util.concurrent.Uninterruptibles;
import java.io.IOException;
import java.io.Serializable;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;
+import org.apache.pekko.actor.ActorRef;
import org.junit.Before;
import org.junit.Test;
import org.mockito.ArgumentCaptor;
import static org.mockito.Mockito.verify;
import static org.opendaylight.controller.cluster.messaging.MessageSlicerTest.slice;
-import akka.actor.ActorRef;
-import akka.actor.ActorSystem;
-import akka.testkit.TestProbe;
-import akka.testkit.javadsl.TestKit;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.util.function.BiConsumer;
import java.util.function.Consumer;
import org.apache.commons.lang3.SerializationUtils;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.testkit.TestProbe;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.After;
import org.junit.AfterClass;
import org.junit.Before;
*/
package org.opendaylight.controller.cluster.notifications;
-import akka.actor.ActorRef;
-import akka.actor.ActorSystem;
-import akka.testkit.TestProbe;
-import akka.testkit.javadsl.TestKit;
import java.util.ArrayList;
import java.util.List;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.testkit.TestProbe;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.After;
import org.junit.Assert;
import org.junit.Before;
*/
package org.opendaylight.controller.cluster.persistence;
-import akka.persistence.snapshot.SnapshotStoreSpec;
import com.typesafe.config.ConfigFactory;
import java.io.File;
import java.io.IOException;
import org.apache.commons.io.FileUtils;
+import org.apache.pekko.persistence.snapshot.SnapshotStoreSpec;
import org.junit.runner.RunWith;
import org.scalatestplus.junit.JUnitRunner;
import static org.opendaylight.controller.cluster.persistence.LocalSnapshotStoreSpecTest.cleanSnapshotDir;
import static org.opendaylight.controller.cluster.persistence.LocalSnapshotStoreSpecTest.createSnapshotDir;
-import akka.actor.ActorRef;
-import akka.actor.ActorSystem;
-import akka.actor.ExtendedActorSystem;
-import akka.persistence.Persistence;
-import akka.persistence.SelectedSnapshot;
-import akka.persistence.SnapshotMetadata;
-import akka.persistence.SnapshotProtocol;
-import akka.persistence.SnapshotProtocol.LoadSnapshot;
-import akka.persistence.SnapshotProtocol.LoadSnapshotFailed;
-import akka.persistence.SnapshotProtocol.LoadSnapshotResult;
-import akka.persistence.SnapshotSelectionCriteria;
-import akka.persistence.serialization.Snapshot;
-import akka.persistence.serialization.SnapshotSerializer;
-import akka.testkit.javadsl.TestKit;
import com.typesafe.config.ConfigFactory;
import java.io.File;
import java.io.FileOutputStream;
import java.nio.charset.StandardCharsets;
import org.apache.commons.io.FileUtils;
import org.apache.commons.lang3.SerializationUtils;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.actor.ExtendedActorSystem;
+import org.apache.pekko.persistence.Persistence;
+import org.apache.pekko.persistence.SelectedSnapshot;
+import org.apache.pekko.persistence.SnapshotMetadata;
+import org.apache.pekko.persistence.SnapshotProtocol;
+import org.apache.pekko.persistence.SnapshotProtocol.LoadSnapshot;
+import org.apache.pekko.persistence.SnapshotProtocol.LoadSnapshotFailed;
+import org.apache.pekko.persistence.SnapshotProtocol.LoadSnapshotResult;
+import org.apache.pekko.persistence.SnapshotSelectionCriteria;
+import org.apache.pekko.persistence.serialization.Snapshot;
+import org.apache.pekko.persistence.serialization.SnapshotSerializer;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.After;
import org.junit.AfterClass;
import org.junit.Before;
import static org.mockito.Mockito.doReturn;
import static org.mockito.Mockito.mock;
-import akka.dispatch.ExecutionContexts;
-import akka.dispatch.Futures;
import com.google.common.io.CharSource;
import com.google.common.util.concurrent.MoreExecutors;
import java.io.IOException;
import java.util.concurrent.ExecutionException;
+import org.apache.pekko.dispatch.ExecutionContexts;
+import org.apache.pekko.dispatch.Futures;
import org.junit.Before;
import org.junit.Test;
import org.opendaylight.controller.cluster.schema.provider.RemoteYangTextSourceProvider;
-akka {
+pekko {
persistence {
snapshot-store.local.class = "org.opendaylight.controller.cluster.persistence.LocalSnapshotStore"
- snapshot-store.plugin = akka.persistence.snapshot-store.local
+ snapshot-store.plugin = pekko.persistence.snapshot-store.local
snapshot-store.local.dir = "target/snapshots"
snapshot-store.local.use-lz4-compression = false
}
<configuration>
<artifacts>
<artifact>
- <file>${project.build.directory}/classes/initial/akka.conf</file>
+ <file>${project.build.directory}/classes/initial/pekko.conf</file>
<type>xml</type>
- <classifier>akkaconf</classifier>
+ <classifier>pekkoconf</classifier>
</artifact>
<artifact>
- <file>${project.build.directory}/classes/initial/factory-akka.conf</file>
+ <file>${project.build.directory}/classes/initial/factory-pekko.conf</file>
<type>xml</type>
- <classifier>factoryakkaconf</classifier>
+ <classifier>factorypekkoconf</classifier>
</artifact>
<artifact>
<file>${project.build.directory}/classes/initial/module-shards.conf</file>
mailbox-type = "org.opendaylight.controller.cluster.common.actor.UnboundedDequeBasedControlAwareMailbox"
}
- akka {
+ pekko {
loglevel = "INFO"
- loggers = ["akka.event.slf4j.Slf4jLogger"]
+ loggers = ["org.apache.pekko.event.slf4j.Slf4jLogger"]
logger-startup-timeout = 300s
# JFR requires boot delegation, which we do not have by default
actor {
warn-about-java-serializer-usage = off
- provider = "akka.cluster.ClusterActorRefProvider"
+ provider = "org.apache.pekko.cluster.ClusterActorRefProvider"
serializers {
- java = "akka.serialization.JavaSerializer"
- proto = "akka.remote.serialization.ProtobufSerializer"
+ java = "org.apache.pekko.serialization.JavaSerializer"
+ proto = "org.apache.pekko.remote.serialization.ProtobufSerializer"
readylocal = "org.opendaylight.controller.cluster.datastore.messages.ReadyLocalTransactionSerializer"
simpleReplicatedLogEntry = "org.opendaylight.controller.cluster.raft.persisted.SimpleReplicatedLogEntrySerializer"
}
default-mailbox {
# When not using a BalancingDispatcher it is recommended that we use the SingleConsumerOnlyUnboundedMailbox
# as it is the most efficient for multiple producer/single consumer use cases
- mailbox-type="akka.dispatch.SingleConsumerOnlyUnboundedMailbox"
+ mailbox-type="org.apache.pekko.dispatch.SingleConsumerOnlyUnboundedMailbox"
}
}
remote {
notify-subscribers-interval = 20 ms
}
- downing-provider-class = "akka.cluster.sbr.SplitBrainResolverProvider"
+ downing-provider-class = "org.apache.pekko.cluster.sbr.SplitBrainResolverProvider"
split-brain-resolver {
active-strategy = keep-majority
# is stored in a separate directory, with multiple segment files. Segments are removed
# when they are no longer required.
#
- plugin = akka.persistence.journal.segmented-file
+ plugin = pekko.persistence.journal.segmented-file
segmented-file {
class = "org.opendaylight.controller.akka.segjournal.SegmentedFileJournal"
}
snapshot-store.local.class = "org.opendaylight.controller.cluster.persistence.LocalSnapshotStore"
- snapshot-store.plugin = akka.persistence.snapshot-store.local
+ snapshot-store.plugin = pekko.persistence.snapshot-store.local
}
}
odl-cluster-data {
- akka {
+ pekko {
remote {
artery {
enabled = on
cluster {
# Using artery.
- seed-nodes = ["akka://opendaylight-cluster-data@127.0.0.1:2550"]
+ seed-nodes = ["pekko://opendaylight-cluster-data@127.0.0.1:2550"]
roles = [
"member-1"
<artifactId>org.osgi.service.metatype.annotations</artifactId>
</dependency>
- <!-- Akka -->
+ <!-- Pekko -->
<dependency>
<groupId>org.scala-lang.modules</groupId>
<artifactId>scala-java8-compat_2.13</artifactId>
</dependency>
<dependency>
- <groupId>com.typesafe.akka</groupId>
- <artifactId>akka-testkit_2.13</artifactId>
+ <groupId>org.apache.pekko</groupId>
+ <artifactId>pekko-testkit_2.13</artifactId>
</dependency>
<!-- Scala -->
*/
package org.opendaylight.controller.cluster.akka.impl;
-import akka.actor.ActorSystem;
-import akka.actor.Props;
-import akka.actor.Terminated;
-import akka.dispatch.OnComplete;
import com.typesafe.config.Config;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.actor.Props;
+import org.apache.pekko.actor.Terminated;
+import org.apache.pekko.dispatch.OnComplete;
import org.eclipse.jdt.annotation.NonNull;
import org.opendaylight.controller.cluster.ActorSystemProvider;
import org.opendaylight.controller.cluster.ActorSystemProviderListener;
*/
package org.opendaylight.controller.cluster.akka.osgi.impl;
-import akka.osgi.BundleDelegatingClassLoader;
import java.security.AccessController;
import java.security.PrivilegedAction;
+import org.apache.pekko.osgi.BundleDelegatingClassLoader;
import org.osgi.framework.BundleContext;
public final class BundleClassLoaderFactory {
*/
package org.opendaylight.controller.cluster.akka.osgi.impl;
-import akka.actor.ActorSystem;
import java.util.concurrent.TimeoutException;
+import org.apache.pekko.actor.ActorSystem;
import org.opendaylight.controller.cluster.ActorSystemProvider;
import org.opendaylight.controller.cluster.ActorSystemProviderListener;
import org.opendaylight.controller.cluster.akka.impl.ActorSystemProviderImpl;
*/
package org.opendaylight.controller.cluster.akka.osgi.impl;
-import akka.actor.Props;
import com.typesafe.config.Config;
import com.typesafe.config.ConfigException;
+import org.apache.pekko.actor.Props;
import org.opendaylight.controller.cluster.common.actor.QuarantinedMonitorActor;
import org.osgi.framework.BundleContext;
import org.slf4j.Logger;
private static final Logger LOG = LoggerFactory.getLogger(QuarantinedMonitorActorPropsFactory.class);
private static final String DEFAULT_HANDLING_DISABLED =
- "akka.disable-default-actor-system-quarantined-event-handling";
+ "pekko.disable-default-actor-system-quarantined-event-handling";
private QuarantinedMonitorActorPropsFactory() {
return QuarantinedMonitorActor.props(() -> { });
}
} catch (ConfigException configEx) {
- LOG.info("Akka config doesn't contain property {}. Therefore default handling will be used",
+ LOG.info("Pekko config does not contain property {}; default handling will be used",
DEFAULT_HANDLING_DISABLED);
}
return QuarantinedMonitorActor.props(() -> {
*/
package org.opendaylight.controller.cluster.databroker;
-import akka.actor.ActorSystem;
import com.google.common.annotations.VisibleForTesting;
+import org.apache.pekko.actor.ActorSystem;
import org.opendaylight.controller.cluster.access.concepts.ClientIdentifier;
import org.opendaylight.controller.cluster.databroker.actors.dds.DataStoreClient;
import org.opendaylight.controller.cluster.datastore.AbstractDataStore;
import static com.google.common.base.Verify.verifyNotNull;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
-import akka.util.Timeout;
import com.google.common.base.Throwables;
import java.util.concurrent.TimeUnit;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.util.Timeout;
import org.eclipse.jdt.annotation.NonNull;
import org.opendaylight.controller.cluster.access.client.AbstractClientActor;
import org.opendaylight.controller.cluster.access.client.ClientActorConfig;
*/
package org.opendaylight.controller.cluster.databroker.actors.dds;
-import akka.actor.ActorRef;
-import akka.actor.Status;
import com.google.common.base.Throwables;
import com.google.common.base.Verify;
import java.util.ArrayList;
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.StampedLock;
import java.util.stream.Stream;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.Status;
import org.opendaylight.controller.cluster.access.client.ClientActorBehavior;
import org.opendaylight.controller.cluster.access.client.ClientActorContext;
import org.opendaylight.controller.cluster.access.client.ConnectedClientConnection;
import static com.google.common.base.Verify.verifyNotNull;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
import com.google.common.base.MoreObjects;
import com.google.common.collect.Iterables;
import com.google.common.util.concurrent.FluentFuture;
import java.util.concurrent.atomic.AtomicIntegerFieldUpdater;
import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;
import java.util.function.Consumer;
+import org.apache.pekko.actor.ActorRef;
import org.checkerframework.checker.lock.qual.GuardedBy;
import org.eclipse.jdt.annotation.NonNull;
import org.eclipse.jdt.annotation.Nullable;
import static com.google.common.base.Preconditions.checkArgument;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
-import akka.util.Timeout;
import com.google.common.primitives.UnsignedLong;
import java.util.Set;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeoutException;
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Consumer;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.util.Timeout;
import org.checkerframework.checker.lock.qual.GuardedBy;
import org.eclipse.jdt.annotation.NonNull;
import org.eclipse.jdt.annotation.Nullable;
*/
package org.opendaylight.controller.cluster.databroker.actors.dds;
-import akka.actor.Props;
+import org.apache.pekko.actor.Props;
import org.eclipse.jdt.annotation.NonNull;
import org.opendaylight.controller.cluster.access.client.AbstractClientActor;
import org.opendaylight.controller.cluster.access.client.ClientActorContext;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
+import org.apache.pekko.actor.ActorRef;
/**
* Request the ClientIdentifier from a particular actor. Response is an instance of {@link DataStoreClient}.
import static com.google.common.base.Verify.verifyNotNull;
-import akka.dispatch.ExecutionContexts;
-import akka.dispatch.OnComplete;
-import akka.pattern.Patterns;
-import akka.util.Timeout;
import com.google.common.collect.ImmutableBiMap;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.TimeUnit;
import java.util.stream.Stream;
+import org.apache.pekko.dispatch.ExecutionContexts;
+import org.apache.pekko.dispatch.OnComplete;
+import org.apache.pekko.pattern.Patterns;
+import org.apache.pekko.util.Timeout;
import org.checkerframework.checker.lock.qual.GuardedBy;
import org.eclipse.jdt.annotation.NonNull;
import org.opendaylight.controller.cluster.access.client.BackendInfoResolver;
import static com.google.common.base.Verify.verifyNotNull;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
import com.google.common.collect.ImmutableList;
import com.google.common.primitives.UnsignedLong;
import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Consumer;
+import org.apache.pekko.actor.ActorRef;
import org.checkerframework.checker.lock.qual.GuardedBy;
import org.checkerframework.checker.lock.qual.Holding;
import org.eclipse.jdt.annotation.NonNull;
import static com.google.common.base.Preconditions.checkArgument;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
import com.google.common.base.MoreObjects.ToStringHelper;
import com.google.common.primitives.UnsignedLong;
import java.util.Optional;
+import org.apache.pekko.actor.ActorRef;
import org.opendaylight.controller.cluster.access.ABIVersion;
import org.opendaylight.controller.cluster.access.client.BackendInfo;
import org.opendaylight.controller.cluster.access.concepts.LocalHistoryIdentifier;
import static java.util.Objects.requireNonNull;
-import akka.actor.Props;
+import org.apache.pekko.actor.Props;
import org.eclipse.jdt.annotation.NonNull;
import org.opendaylight.controller.cluster.access.client.AbstractClientActor;
import org.opendaylight.controller.cluster.access.client.ClientActorContext;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
-import akka.actor.ActorSystem;
-import akka.actor.PoisonPill;
-import akka.actor.Props;
import com.google.common.annotations.Beta;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.base.Throwables;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.actor.PoisonPill;
+import org.apache.pekko.actor.Props;
import org.eclipse.jdt.annotation.NonNull;
import org.opendaylight.controller.cluster.access.concepts.ClientIdentifier;
import org.opendaylight.controller.cluster.common.actor.Dispatchers;
*/
package org.opendaylight.controller.cluster.datastore;
-import akka.actor.ActorContext;
-import akka.actor.ActorRef;
-import akka.actor.Props;
import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
+import org.apache.pekko.actor.ActorContext;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.Props;
import org.opendaylight.controller.cluster.common.actor.Dispatchers;
import org.opendaylight.yangtools.yang.data.tree.api.DataTreeCandidate;
import org.slf4j.Logger;
package org.opendaylight.controller.cluster.datastore;
-import akka.actor.ActorRef;
-import akka.actor.Address;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.Address;
import org.opendaylight.controller.cluster.access.concepts.MemberName;
public interface ClusterWrapper {
import static com.google.common.base.Preconditions.checkState;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
-import akka.actor.ActorSystem;
-import akka.actor.Address;
-import akka.cluster.Cluster;
-import akka.cluster.ClusterEvent;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.actor.Address;
+import org.apache.pekko.cluster.Cluster;
+import org.apache.pekko.cluster.ClusterEvent;
import org.opendaylight.controller.cluster.access.concepts.MemberName;
public class ClusterWrapperImpl implements ClusterWrapper {
cluster = Cluster.get(requireNonNull(actorSystem, "actorSystem should not be null"));
checkState(cluster.getSelfRoles().size() > 0,
- "No akka roles were specified.\n"
+ "No Pekko roles were specified.\n"
+ "One way to specify the member name is to pass a property on the command line like so\n"
- + " -Dakka.cluster.roles.0=member-3\n"
+ + " -Dpekko.cluster.roles.0=member-3\n"
+ "member-3 here would be the name of the member");
currentMemberName = MemberName.forName(cluster.getSelfRoles().iterator().next());
import static com.google.common.base.Preconditions.checkState;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
import com.google.common.primitives.UnsignedLong;
import com.google.common.util.concurrent.FutureCallback;
import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
import java.util.List;
import java.util.Optional;
import java.util.SortedSet;
+import org.apache.pekko.actor.ActorRef;
import org.opendaylight.controller.cluster.access.concepts.TransactionIdentifier;
import org.opendaylight.controller.cluster.datastore.ShardCommitCoordinator.CohortDecorator;
import org.opendaylight.controller.cluster.datastore.modification.Modification;
import static com.google.common.base.Preconditions.checkState;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
-import akka.actor.Status;
-import akka.actor.Status.Failure;
-import akka.dispatch.ExecutionContexts;
-import akka.dispatch.Futures;
-import akka.dispatch.OnComplete;
-import akka.dispatch.Recover;
-import akka.pattern.Patterns;
-import akka.util.Timeout;
import com.google.common.collect.Lists;
import java.util.AbstractMap.SimpleImmutableEntry;
import java.util.ArrayList;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.Executor;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.Status;
+import org.apache.pekko.actor.Status.Failure;
+import org.apache.pekko.dispatch.ExecutionContexts;
+import org.apache.pekko.dispatch.Futures;
+import org.apache.pekko.dispatch.OnComplete;
+import org.apache.pekko.dispatch.Recover;
+import org.apache.pekko.pattern.Patterns;
+import org.apache.pekko.util.Timeout;
import org.eclipse.jdt.annotation.NonNull;
import org.opendaylight.controller.cluster.access.concepts.TransactionIdentifier;
import org.opendaylight.controller.cluster.datastore.DataTreeCohortActor.CanCommit;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
-import akka.actor.Props;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.Props;
import org.opendaylight.controller.cluster.common.actor.AbstractUntypedActor;
import org.opendaylight.controller.cluster.datastore.messages.DataTreeChanged;
import org.opendaylight.controller.cluster.datastore.messages.DataTreeChangedReply;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
-import akka.actor.ActorSelection;
-import akka.actor.PoisonPill;
-import akka.dispatch.OnComplete;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.util.concurrent.MoreExecutors;
import java.util.concurrent.Executor;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSelection;
+import org.apache.pekko.actor.PoisonPill;
+import org.apache.pekko.dispatch.OnComplete;
import org.checkerframework.checker.lock.qual.GuardedBy;
import org.eclipse.jdt.annotation.NonNull;
import org.opendaylight.controller.cluster.datastore.exceptions.LocalShardNotFoundException;
*/
package org.opendaylight.controller.cluster.datastore;
-import akka.actor.ActorRef;
-import akka.actor.ActorSelection;
import java.util.ArrayList;
import java.util.Collection;
import java.util.concurrent.ConcurrentHashMap;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSelection;
import org.opendaylight.controller.cluster.datastore.actors.DataTreeNotificationListenerRegistrationActor;
import org.opendaylight.controller.cluster.datastore.messages.EnableNotification;
import org.opendaylight.controller.cluster.datastore.messages.RegisterDataTreeChangeListener;
*/
package org.opendaylight.controller.cluster.datastore;
-import akka.actor.ActorRef;
-import akka.actor.Props;
-import akka.actor.Status;
import com.google.common.util.concurrent.FutureCallback;
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.ListenableFuture;
import java.util.Map;
import java.util.Objects;
import java.util.concurrent.Executor;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.Props;
+import org.apache.pekko.actor.Status;
import org.eclipse.jdt.annotation.NonNull;
import org.eclipse.jdt.annotation.Nullable;
import org.opendaylight.controller.cluster.access.concepts.TransactionIdentifier;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
-import akka.actor.PoisonPill;
-import akka.actor.Status;
-import akka.util.Timeout;
import com.google.common.collect.ArrayListMultimap;
import com.google.common.collect.Multimap;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.Executor;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.PoisonPill;
+import org.apache.pekko.actor.Status;
+import org.apache.pekko.util.Timeout;
import org.opendaylight.controller.cluster.access.concepts.TransactionIdentifier;
import org.opendaylight.mdsal.common.api.LogicalDatastoreType;
import org.opendaylight.mdsal.dom.api.DOMDataTreeCandidate;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
-import akka.dispatch.OnComplete;
-import akka.pattern.Patterns;
-import akka.util.Timeout;
import java.util.concurrent.TimeUnit;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.dispatch.OnComplete;
+import org.apache.pekko.pattern.Patterns;
+import org.apache.pekko.util.Timeout;
import org.checkerframework.checker.lock.qual.GuardedBy;
import org.opendaylight.controller.cluster.datastore.exceptions.LocalShardNotFoundException;
import org.opendaylight.controller.cluster.datastore.utils.ActorUtils;
import static com.google.common.base.Preconditions.checkArgument;
import static java.util.Objects.requireNonNull;
-import akka.util.Timeout;
import com.google.common.annotations.VisibleForTesting;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;
import org.apache.commons.text.WordUtils;
+import org.apache.pekko.util.Timeout;
import org.opendaylight.controller.cluster.access.client.AbstractClientConnection;
import org.opendaylight.controller.cluster.access.client.ClientActorConfig;
import org.opendaylight.controller.cluster.common.actor.AkkaConfigurationReader;
*/
package org.opendaylight.controller.cluster.datastore;
-import akka.actor.ActorRef;
+import org.apache.pekko.actor.ActorRef;
import org.checkerframework.checker.lock.qual.GuardedBy;
import org.opendaylight.controller.cluster.datastore.messages.RegisterDataTreeChangeListener;
import org.opendaylight.yangtools.concepts.Registration;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
-import akka.actor.ActorSelection;
import java.util.List;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSelection;
import org.eclipse.jdt.annotation.Nullable;
import org.opendaylight.controller.cluster.datastore.messages.DataTreeChanged;
import org.opendaylight.controller.cluster.datastore.messages.OnInitialData;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorPath;
-import akka.actor.ActorRef;
-import akka.actor.ActorSelection;
-import akka.actor.Props;
+import org.apache.pekko.actor.ActorPath;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSelection;
+import org.apache.pekko.actor.Props;
/**
* Base class for factories instantiating delegates which are local to the
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
-import akka.pattern.Patterns;
-import akka.util.Timeout;
import com.google.common.base.Stopwatch;
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import java.util.concurrent.TimeUnit;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.pattern.Patterns;
+import org.apache.pekko.util.Timeout;
import org.opendaylight.controller.cluster.datastore.messages.OnDemandShardState;
import org.opendaylight.controller.cluster.raft.client.messages.GetOnDemandRaftState;
import scala.concurrent.Await;
import static com.google.common.base.Verify.verify;
import static com.google.common.base.Verify.verifyNotNull;
-import akka.actor.ActorRef;
-import akka.actor.Props;
import com.google.common.collect.Iterables;
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.Props;
import org.opendaylight.controller.cluster.datastore.messages.DataTreeChanged;
import org.opendaylight.controller.cluster.datastore.messages.OnInitialData;
import org.opendaylight.mdsal.dom.api.DOMDataTreeChangeListener;
import static com.google.common.base.Verify.verifyNotNull;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
-import akka.actor.ActorSelection;
-import akka.actor.PoisonPill;
-import akka.dispatch.OnComplete;
import com.google.common.collect.Maps;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Map;
import java.util.Map.Entry;
import java.util.Set;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSelection;
+import org.apache.pekko.actor.PoisonPill;
+import org.apache.pekko.dispatch.OnComplete;
import org.checkerframework.checker.lock.qual.GuardedBy;
import org.checkerframework.checker.lock.qual.Holding;
import org.eclipse.jdt.annotation.NonNull;
import static com.google.common.base.Verify.verifyNotNull;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
-import akka.actor.ActorSelection;
-import akka.actor.Cancellable;
-import akka.actor.ExtendedActorSystem;
-import akka.actor.PoisonPill;
-import akka.actor.Props;
-import akka.actor.Status;
-import akka.actor.Status.Failure;
-import akka.persistence.RecoveryCompleted;
-import akka.persistence.SnapshotOffer;
-import akka.serialization.JavaSerializer;
-import akka.serialization.Serialization;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.base.Ticker;
import com.google.common.collect.ImmutableList;
import java.util.OptionalLong;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSelection;
+import org.apache.pekko.actor.Cancellable;
+import org.apache.pekko.actor.ExtendedActorSystem;
+import org.apache.pekko.actor.PoisonPill;
+import org.apache.pekko.actor.Props;
+import org.apache.pekko.actor.Status;
+import org.apache.pekko.actor.Status.Failure;
+import org.apache.pekko.persistence.RecoveryCompleted;
+import org.apache.pekko.persistence.SnapshotOffer;
+import org.apache.pekko.serialization.JavaSerializer;
+import org.apache.pekko.serialization.Serialization;
import org.eclipse.jdt.annotation.NonNull;
import org.eclipse.jdt.annotation.Nullable;
import org.opendaylight.controller.cluster.access.ABIVersion;
private static final Collection<ABIVersion> SUPPORTED_ABIVERSIONS;
- // Make sure to keep this in sync with the journal configuration in factory-akka.conf
- public static final String NON_PERSISTENT_JOURNAL_ID = "akka.persistence.non-persistent.journal";
+ // Make sure to keep this in sync with the journal configuration in factory-pekko.conf
+ public static final String NON_PERSISTENT_JOURNAL_ID = "pekko.persistence.non-persistent.journal";
static {
final ABIVersion[] values = ABIVersion.values();
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
-import akka.actor.Status.Failure;
-import akka.serialization.Serialization;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.primitives.UnsignedLong;
import com.google.common.util.concurrent.FutureCallback;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.Status.Failure;
+import org.apache.pekko.serialization.Serialization;
import org.eclipse.jdt.annotation.NonNull;
import org.opendaylight.controller.cluster.access.concepts.TransactionIdentifier;
import org.opendaylight.controller.cluster.datastore.messages.AbortTransactionReply;
*/
package org.opendaylight.controller.cluster.datastore;
-import static akka.actor.ActorRef.noSender;
import static com.google.common.base.Preconditions.checkState;
import static com.google.common.base.Verify.verify;
import static com.google.common.base.Verify.verifyNotNull;
import static java.util.Objects.requireNonNull;
import static java.util.Objects.requireNonNullElse;
+import static org.apache.pekko.actor.ActorRef.noSender;
-import akka.actor.ActorRef;
-import akka.util.Timeout;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.base.Stopwatch;
import com.google.common.collect.ImmutableList;
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.function.UnaryOperator;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.util.Timeout;
import org.eclipse.jdt.annotation.NonNull;
import org.eclipse.jdt.annotation.Nullable;
import org.opendaylight.controller.cluster.access.concepts.LocalHistoryIdentifier;
*/
package org.opendaylight.controller.cluster.datastore;
-import akka.actor.ActorContext;
-import akka.actor.ActorRef;
-import akka.actor.Props;
import java.util.Optional;
import java.util.function.Consumer;
+import org.apache.pekko.actor.ActorContext;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.Props;
import org.opendaylight.mdsal.dom.api.DOMDataTreeChangeListener;
import org.opendaylight.yangtools.concepts.Registration;
import org.opendaylight.yangtools.yang.data.api.YangInstanceIdentifier;
import static java.util.Objects.requireNonNull;
-import akka.actor.Props;
import java.util.Optional;
import java.util.function.Consumer;
+import org.apache.pekko.actor.Props;
import org.opendaylight.mdsal.dom.api.DOMDataTreeChangeListener;
import org.opendaylight.yangtools.concepts.Registration;
import org.opendaylight.yangtools.yang.data.api.YangInstanceIdentifier;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
-import akka.actor.ActorSelection;
-import akka.dispatch.Futures;
-import akka.pattern.Patterns;
-import akka.util.Timeout;
import com.google.common.base.Throwables;
import com.google.common.collect.Streams;
import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.stream.Collectors;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSelection;
+import org.apache.pekko.dispatch.Futures;
+import org.apache.pekko.pattern.Patterns;
+import org.apache.pekko.util.Timeout;
import org.opendaylight.controller.cluster.datastore.jmx.mbeans.shard.ShardDataTreeListenerInfoMXBean;
import org.opendaylight.controller.cluster.datastore.messages.GetInfo;
import org.opendaylight.controller.cluster.datastore.messages.OnDemandShardState;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
+import org.apache.pekko.actor.ActorRef;
import org.opendaylight.controller.cluster.datastore.messages.DataExists;
import org.opendaylight.controller.cluster.datastore.messages.ReadData;
*/
package org.opendaylight.controller.cluster.datastore;
-import akka.actor.ActorRef;
+import org.apache.pekko.actor.ActorRef;
import org.opendaylight.controller.cluster.datastore.messages.DataExists;
import org.opendaylight.controller.cluster.datastore.messages.ReadData;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorContext;
-import akka.actor.ActorRef;
import com.google.common.io.ByteSource;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.OutputStream;
import java.util.Optional;
+import org.apache.pekko.actor.ActorContext;
+import org.apache.pekko.actor.ActorRef;
import org.opendaylight.controller.cluster.access.concepts.ClientIdentifier;
import org.opendaylight.controller.cluster.access.concepts.FrontendIdentifier;
import org.opendaylight.controller.cluster.access.concepts.FrontendType;
*/
package org.opendaylight.controller.cluster.datastore;
-import akka.actor.ActorRef;
import com.google.common.base.Joiner;
import com.google.common.base.Joiner.MapJoiner;
import java.time.Instant;
import java.time.format.DateTimeFormatter;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;
+import org.apache.pekko.actor.ActorRef;
import org.eclipse.jdt.annotation.NonNull;
import org.eclipse.jdt.annotation.Nullable;
import org.opendaylight.controller.cluster.datastore.jmx.mbeans.shard.ShardStatsMXBean;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
-import akka.actor.PoisonPill;
-import akka.actor.Props;
-import akka.actor.ReceiveTimeout;
-import akka.japi.Creator;
import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.PoisonPill;
+import org.apache.pekko.actor.Props;
+import org.apache.pekko.actor.ReceiveTimeout;
+import org.apache.pekko.japi.Creator;
import org.opendaylight.controller.cluster.access.concepts.TransactionIdentifier;
import org.opendaylight.controller.cluster.common.actor.AbstractUntypedActorWithMetering;
import org.opendaylight.controller.cluster.datastore.messages.CloseTransaction;
final boolean ret = transaction.isClosed();
if (ret) {
shardStats.incrementFailedReadTransactionsCount();
- getSender().tell(new akka.actor.Status.Failure(new ReadFailedException("Transaction is closed")),
- getSelf());
+ getSender().tell(new org.apache.pekko.actor.Status.Failure(
+ new ReadFailedException("Transaction is closed")), getSelf());
}
return ret;
}
import static java.util.Objects.requireNonNull;
-import akka.actor.AbstractActor.ActorContext;
-import akka.actor.ActorRef;
import java.util.concurrent.atomic.AtomicLong;
+import org.apache.pekko.actor.AbstractActor.ActorContext;
+import org.apache.pekko.actor.ActorRef;
import org.opendaylight.controller.cluster.access.concepts.ClientIdentifier;
import org.opendaylight.controller.cluster.access.concepts.FrontendIdentifier;
import org.opendaylight.controller.cluster.access.concepts.LocalHistoryIdentifier;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
-import akka.actor.Cancellable;
-import akka.actor.Status.Failure;
import java.io.Closeable;
import java.util.LinkedHashSet;
import java.util.Set;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.Cancellable;
+import org.apache.pekko.actor.Status.Failure;
import org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
*/
package org.opendaylight.controller.cluster.datastore;
-import akka.actor.ActorRef;
-import akka.actor.PoisonPill;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.PoisonPill;
import org.opendaylight.controller.cluster.access.concepts.TransactionIdentifier;
import org.opendaylight.controller.cluster.datastore.messages.BatchedModifications;
import org.opendaylight.controller.cluster.datastore.messages.BatchedModificationsReply;
}
} catch (Exception e) {
lastBatchedModificationsException = e;
- getSender().tell(new akka.actor.Status.Failure(e), getSelf());
+ getSender().tell(new org.apache.pekko.actor.Status.Failure(e), getSelf());
if (batched.isReady()) {
getSelf().tell(PoisonPill.getInstance(), getSelf());
private boolean checkClosed() {
final boolean ret = transaction.isClosed();
if (ret) {
- getSender().tell(new akka.actor.Status.Failure(new IllegalStateException(
+ getSender().tell(new org.apache.pekko.actor.Status.Failure(new IllegalStateException(
"Transaction is closed, no modifications allowed")), getSelf());
}
return ret;
*/
package org.opendaylight.controller.cluster.datastore;
-import akka.actor.Terminated;
-import akka.actor.UntypedAbstractActor;
+import org.apache.pekko.actor.Terminated;
+import org.apache.pekko.actor.UntypedAbstractActor;
import org.opendaylight.controller.cluster.common.actor.Monitor;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
-import akka.actor.Cancellable;
-import akka.actor.PoisonPill;
-import akka.actor.Props;
import com.google.common.annotations.VisibleForTesting;
import java.util.concurrent.TimeUnit;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.Cancellable;
+import org.apache.pekko.actor.PoisonPill;
+import org.apache.pekko.actor.Props;
import org.eclipse.jdt.annotation.NonNullByDefault;
import org.opendaylight.controller.cluster.common.actor.AbstractUntypedActor;
import org.opendaylight.controller.cluster.datastore.messages.CloseDataTreeNotificationListenerRegistration;
import static com.google.common.base.Preconditions.checkState;
import static java.util.Objects.requireNonNull;
-import akka.actor.Props;
import com.google.gson.stream.JsonWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
+import org.apache.pekko.actor.Props;
import org.eclipse.jdt.annotation.NonNull;
import org.eclipse.jdt.annotation.Nullable;
import org.opendaylight.controller.cluster.common.actor.AbstractUntypedActor;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
-import akka.actor.Props;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.OutputStream;
import java.util.Optional;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.Props;
import org.opendaylight.controller.cluster.common.actor.AbstractUntypedActorWithMetering;
import org.opendaylight.controller.cluster.datastore.persisted.ShardDataTreeSnapshot;
import org.opendaylight.controller.cluster.datastore.persisted.ShardSnapshotState;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
+import org.apache.pekko.actor.ActorRef;
import org.eclipse.jdt.annotation.NonNullByDefault;
@NonNullByDefault
package org.opendaylight.controller.cluster.datastore.messages;
-import akka.actor.ActorRef;
+import org.apache.pekko.actor.ActorRef;
/**
* LocalShardFound is a message that is sent by the
/**
* Message sent to local shard to try to gain shard leadership. Sender of this
* message will be notified about result of leadership transfer with
- * {@link akka.actor.Status.Success}, if leadership is successfully transferred
- * to local shard. Otherwise {@link akka.actor.Status.Failure} with
+ * {@link org.apache.pekko.actor.Status.Success} if leadership is successfully transferred
+ * to the local shard. Otherwise a {@link org.apache.pekko.actor.Status.Failure} with
* {@link org.opendaylight.controller.cluster.raft.LeadershipTransferFailedException}
* will be sent to sender of this message.
*/
*/
package org.opendaylight.controller.cluster.datastore.messages;
-import akka.actor.ActorRef;
-import akka.actor.ActorSelection;
import java.util.Collection;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSelection;
import org.opendaylight.controller.cluster.raft.client.messages.OnDemandRaftState;
/**
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorSelection;
import java.util.Optional;
+import org.apache.pekko.actor.ActorSelection;
import org.eclipse.jdt.annotation.NonNull;
import org.opendaylight.yangtools.yang.data.tree.api.ReadOnlyDataTree;
import static com.google.common.base.Preconditions.checkArgument;
import static java.util.Objects.requireNonNull;
-import akka.actor.ExtendedActorSystem;
-import akka.serialization.JSerializer;
-import akka.util.ClassLoaderObjectInputStream;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import org.apache.commons.lang3.SerializationUtils;
+import org.apache.pekko.actor.ExtendedActorSystem;
+import org.apache.pekko.serialization.JSerializer;
+import org.apache.pekko.util.ClassLoaderObjectInputStream;
import org.opendaylight.controller.cluster.datastore.utils.AbstractBatchedModificationsCursor;
/**
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorPath;
-import akka.actor.ActorRef;
import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;
+import org.apache.pekko.actor.ActorPath;
+import org.apache.pekko.actor.ActorRef;
import org.opendaylight.controller.cluster.datastore.node.utils.stream.SerializationUtils;
import org.opendaylight.yangtools.yang.data.api.YangInstanceIdentifier;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorPath;
-import akka.actor.ActorRef;
import java.io.Serializable;
+import org.apache.pekko.actor.ActorPath;
+import org.apache.pekko.actor.ActorRef;
/**
* Successful reply to a notification listener registration request.
import static com.google.common.base.Preconditions.checkState;
import static java.util.Objects.requireNonNull;
-import akka.actor.Props;
import com.google.common.util.concurrent.SettableFuture;
+import org.apache.pekko.actor.Props;
import org.opendaylight.controller.cluster.datastore.AbstractDataStore;
import org.opendaylight.controller.cluster.datastore.ClusterWrapper;
import org.opendaylight.controller.cluster.datastore.DatastoreContextFactory;
/**
* Local ShardManager message to register a callback to be notified of shard availability changes. The reply to
* this message is a {@link org.opendaylight.yangtools.concepts.Registration} instance wrapped in a
- * {@link akka.actor.Status.Success}.
+ * {@link org.apache.pekko.actor.Status.Success}.
*
* @author Thomas Pantelis
*/
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
-import akka.actor.Props;
-import akka.serialization.Serialization;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.base.Strings;
import java.util.HashSet;
import java.util.Objects;
import java.util.Optional;
import java.util.Set;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.Props;
+import org.apache.pekko.serialization.Serialization;
import org.eclipse.jdt.annotation.Nullable;
import org.opendaylight.controller.cluster.datastore.DatastoreContext;
import org.opendaylight.controller.cluster.datastore.Shard;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
-import akka.actor.Address;
-import akka.actor.Cancellable;
-import akka.actor.OneForOneStrategy;
-import akka.actor.PoisonPill;
-import akka.actor.Status;
-import akka.actor.SupervisorStrategy;
-import akka.actor.SupervisorStrategy.Directive;
-import akka.cluster.ClusterEvent;
-import akka.cluster.ClusterEvent.MemberWeaklyUp;
-import akka.cluster.Member;
-import akka.dispatch.Futures;
-import akka.dispatch.OnComplete;
-import akka.japi.Function;
-import akka.pattern.Patterns;
-import akka.persistence.DeleteSnapshotsFailure;
-import akka.persistence.DeleteSnapshotsSuccess;
-import akka.persistence.RecoveryCompleted;
-import akka.persistence.SaveSnapshotFailure;
-import akka.persistence.SaveSnapshotSuccess;
-import akka.persistence.SnapshotOffer;
-import akka.persistence.SnapshotSelectionCriteria;
-import akka.util.Timeout;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.util.concurrent.SettableFuture;
import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
import java.util.concurrent.TimeoutException;
import java.util.function.Consumer;
import java.util.function.Supplier;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.Address;
+import org.apache.pekko.actor.Cancellable;
+import org.apache.pekko.actor.OneForOneStrategy;
+import org.apache.pekko.actor.PoisonPill;
+import org.apache.pekko.actor.Status;
+import org.apache.pekko.actor.SupervisorStrategy;
+import org.apache.pekko.actor.SupervisorStrategy.Directive;
+import org.apache.pekko.cluster.ClusterEvent;
+import org.apache.pekko.cluster.ClusterEvent.MemberWeaklyUp;
+import org.apache.pekko.cluster.Member;
+import org.apache.pekko.dispatch.Futures;
+import org.apache.pekko.dispatch.OnComplete;
+import org.apache.pekko.japi.Function;
+import org.apache.pekko.pattern.Patterns;
+import org.apache.pekko.persistence.DeleteSnapshotsFailure;
+import org.apache.pekko.persistence.DeleteSnapshotsSuccess;
+import org.apache.pekko.persistence.RecoveryCompleted;
+import org.apache.pekko.persistence.SaveSnapshotFailure;
+import org.apache.pekko.persistence.SaveSnapshotSuccess;
+import org.apache.pekko.persistence.SnapshotOffer;
+import org.apache.pekko.persistence.SnapshotSelectionCriteria;
+import org.apache.pekko.util.Timeout;
import org.opendaylight.controller.cluster.access.concepts.MemberName;
import org.opendaylight.controller.cluster.common.actor.AbstractUntypedPersistentActorWithMetering;
import org.opendaylight.controller.cluster.common.actor.Dispatchers;
*/
package org.opendaylight.controller.cluster.datastore.shardmanager;
-import akka.actor.ActorRef;
-import akka.actor.PoisonPill;
-import akka.actor.Props;
-import akka.actor.ReceiveTimeout;
-import akka.actor.Status.Failure;
-import akka.actor.UntypedAbstractActor;
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.concurrent.TimeoutException;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.PoisonPill;
+import org.apache.pekko.actor.Props;
+import org.apache.pekko.actor.ReceiveTimeout;
+import org.apache.pekko.actor.Status.Failure;
+import org.apache.pekko.actor.UntypedAbstractActor;
import org.opendaylight.controller.cluster.datastore.identifiers.ShardIdentifier;
import org.opendaylight.controller.cluster.datastore.persisted.DatastoreSnapshot;
import org.opendaylight.controller.cluster.datastore.persisted.DatastoreSnapshot.ShardSnapshot;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
-import akka.pattern.Patterns;
import com.google.common.base.Throwables;
import java.util.List;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.pattern.Patterns;
import org.opendaylight.controller.cluster.access.concepts.MemberName;
import org.opendaylight.controller.cluster.datastore.identifiers.ShardIdentifier;
import org.opendaylight.controller.cluster.raft.RaftState;
import static java.util.Objects.requireNonNull;
-import akka.actor.Address;
-import akka.actor.AddressFromURIString;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
+import org.apache.pekko.actor.Address;
+import org.apache.pekko.actor.AddressFromURIString;
import org.opendaylight.controller.cluster.access.concepts.MemberName;
import org.opendaylight.controller.cluster.datastore.identifiers.ShardIdentifier;
import org.opendaylight.controller.cluster.datastore.identifiers.ShardManagerIdentifier;
*/
package org.opendaylight.controller.cluster.datastore.utils;
-import akka.actor.ActorPath;
-import akka.actor.ActorRef;
-import akka.actor.ActorSelection;
-import akka.actor.ActorSystem;
-import akka.dispatch.Mapper;
-import akka.dispatch.OnComplete;
-import akka.pattern.AskTimeoutException;
-import akka.pattern.Patterns;
-import akka.util.Timeout;
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.Timer;
import com.google.common.base.Preconditions;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.LongAdder;
import java.util.function.Function;
+import org.apache.pekko.actor.ActorPath;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSelection;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.dispatch.Mapper;
+import org.apache.pekko.dispatch.OnComplete;
+import org.apache.pekko.pattern.AskTimeoutException;
+import org.apache.pekko.pattern.Patterns;
+import org.apache.pekko.util.Timeout;
import org.opendaylight.controller.cluster.access.concepts.MemberName;
import org.opendaylight.controller.cluster.common.actor.Dispatchers;
import org.opendaylight.controller.cluster.datastore.ClusterWrapper;
*/
package org.opendaylight.controller.cluster.datastore.utils;
-import akka.dispatch.OnComplete;
import java.util.ArrayList;
import java.util.List;
+import org.apache.pekko.dispatch.OnComplete;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
*/
package org.opendaylight.controller.cluster.datastore.utils;
-import akka.dispatch.Futures;
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
+import org.apache.pekko.dispatch.Futures;
import org.eclipse.jdt.annotation.NonNull;
import org.eclipse.jdt.annotation.Nullable;
import org.opendaylight.controller.cluster.datastore.messages.PrimaryShardInfo;
default false;
type boolean;
description "Use lz4 compression for snapshots, sent from leader to follower, for snapshots stored
- by LocalSnapshotStore, use akka.conf configuration.";
+ by LocalSnapshotStore, use the pekko.conf configuration instead.";
}
leaf export-on-recovery {
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;
-import akka.util.Timeout;
import com.google.common.base.Stopwatch;
import com.google.common.util.concurrent.Uninterruptibles;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.TimeUnit;
+import org.apache.pekko.util.Timeout;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
*/
package org.opendaylight.controller.cluster.databroker;
-import akka.actor.ActorSystem;
+import org.apache.pekko.actor.ActorSystem;
import org.opendaylight.controller.cluster.access.concepts.ClientIdentifier;
import org.opendaylight.controller.cluster.databroker.actors.dds.DataStoreClient;
import org.opendaylight.controller.cluster.datastore.ClusterWrapper;
import static org.opendaylight.controller.cluster.databroker.actors.dds.TestUtils.HISTORY_ID;
import static org.opendaylight.controller.cluster.databroker.actors.dds.TestUtils.TRANSACTION_ID;
-import akka.actor.ActorRef;
-import akka.actor.ActorSelection;
-import akka.actor.ActorSystem;
-import akka.testkit.TestProbe;
-import akka.testkit.javadsl.TestKit;
import java.util.List;
import java.util.Map;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSelection;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.testkit.TestProbe;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
-import akka.actor.ActorRef;
-import akka.actor.ActorSelection;
-import akka.actor.ActorSystem;
import com.google.common.primitives.UnsignedLong;
import java.util.Optional;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSelection;
+import org.apache.pekko.actor.ActorSystem;
import org.junit.Test;
import org.mockito.Mock;
import org.opendaylight.controller.cluster.access.ABIVersion;
import static org.mockito.Mockito.verify;
import static org.opendaylight.controller.cluster.databroker.actors.dds.TestUtils.CLIENT_ID;
-import akka.actor.ActorRef;
-import akka.actor.ActorSelection;
-import akka.actor.ActorSystem;
-import akka.actor.Status;
-import akka.testkit.TestProbe;
-import akka.testkit.javadsl.TestKit;
import java.util.List;
import java.util.Optional;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSelection;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.actor.Status;
+import org.apache.pekko.testkit.TestProbe;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;
-import akka.actor.ActorSystem;
-import akka.testkit.TestProbe;
-import akka.testkit.javadsl.TestKit;
import com.google.common.base.Ticker;
import com.google.common.primitives.UnsignedLong;
import java.util.ArrayList;
import java.util.Optional;
import java.util.function.BiFunction;
import java.util.function.Consumer;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.testkit.TestProbe;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.hamcrest.BaseMatcher;
import org.hamcrest.Description;
import org.junit.After;
import static org.junit.Assert.assertThrows;
import static org.junit.Assert.assertTrue;
-import akka.actor.ActorSystem;
-import akka.testkit.TestProbe;
-import akka.testkit.javadsl.TestKit;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.testkit.TestProbe;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.opendaylight.controller.cluster.databroker.actors.dds.TestUtils.assertOperationThrowsException;
import static org.opendaylight.controller.cluster.databroker.actors.dds.TestUtils.getWithTimeout;
-import akka.actor.ActorSystem;
-import akka.testkit.TestProbe;
-import akka.testkit.javadsl.TestKit;
import com.google.common.primitives.UnsignedLong;
import com.google.common.util.concurrent.ListenableFuture;
import java.util.ArrayList;
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.stream.Collectors;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.testkit.TestProbe;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.opendaylight.controller.cluster.databroker.actors.dds.TestUtils.TRANSACTION_ID;
import static org.opendaylight.controller.cluster.databroker.actors.dds.TestUtils.getWithTimeout;
-import akka.actor.ActorSystem;
-import akka.testkit.TestProbe;
-import akka.testkit.javadsl.TestKit;
import com.google.common.primitives.UnsignedLong;
import com.google.common.util.concurrent.ListenableFuture;
import java.util.Optional;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.testkit.TestProbe;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.mockito.Mockito.verify;
import static org.opendaylight.controller.cluster.databroker.actors.dds.TestUtils.assertFutureEquals;
-import akka.testkit.TestProbe;
import com.google.common.base.Ticker;
import java.util.Optional;
import java.util.function.Consumer;
+import org.apache.pekko.testkit.TestProbe;
import org.junit.Test;
import org.mockito.ArgumentCaptor;
import org.mockito.invocation.InvocationOnMock;
import static org.mockito.Mockito.when;
import static org.opendaylight.controller.cluster.databroker.actors.dds.TestUtils.assertOperationThrowsException;
-import akka.testkit.TestProbe;
import com.google.common.base.Ticker;
import com.google.common.base.VerifyException;
import java.util.Optional;
+import org.apache.pekko.testkit.TestProbe;
import org.junit.Test;
import org.opendaylight.controller.cluster.access.commands.AbortLocalTransactionRequest;
import org.opendaylight.controller.cluster.access.commands.ModifyTransactionRequest;
import static org.opendaylight.controller.cluster.databroker.actors.dds.TestUtils.assertFutureEquals;
import static org.opendaylight.controller.cluster.databroker.actors.dds.TestUtils.assertOperationThrowsException;
-import akka.testkit.TestProbe;
import com.google.common.base.Ticker;
import com.google.common.util.concurrent.ListenableFuture;
import java.util.Optional;
import java.util.function.Consumer;
+import org.apache.pekko.testkit.TestProbe;
import org.junit.Test;
import org.mockito.Mock;
import org.opendaylight.controller.cluster.access.commands.AbortLocalTransactionRequest;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.verifyNoMoreInteractions;
-import akka.actor.ActorRef;
-import akka.actor.ActorSelection;
-import akka.actor.ActorSystem;
-import akka.actor.Status;
-import akka.testkit.TestProbe;
-import akka.testkit.javadsl.TestKit;
import com.google.common.util.concurrent.Uninterruptibles;
import java.util.List;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSelection;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.actor.Status;
+import org.apache.pekko.testkit.TestProbe;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertEquals;
import static org.opendaylight.controller.cluster.databroker.actors.dds.TestUtils.assertFutureEquals;
-import akka.testkit.TestProbe;
import com.google.common.util.concurrent.FluentFuture;
import com.google.common.util.concurrent.ListenableFuture;
import java.util.List;
import java.util.Optional;
+import org.apache.pekko.testkit.TestProbe;
import org.junit.Test;
import org.opendaylight.controller.cluster.access.commands.ExistsTransactionRequest;
import org.opendaylight.controller.cluster.access.commands.ExistsTransactionSuccess;
import static org.junit.Assert.assertThrows;
import static org.junit.Assert.assertTrue;
-import akka.actor.ActorSystem;
-import akka.testkit.TestProbe;
-import akka.testkit.javadsl.TestKit;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.testkit.TestProbe;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
*/
package org.opendaylight.controller.cluster.databroker.actors.dds;
-import akka.actor.ActorRef;
-import akka.testkit.TestProbe;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.testkit.TestProbe;
import org.eclipse.jdt.annotation.NonNull;
import org.junit.Assert;
import org.opendaylight.controller.cluster.access.ABIVersion;
package org.opendaylight.controller.cluster.datastore;
-import akka.actor.ActorSystem;
-import akka.testkit.javadsl.TestKit;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.AfterClass;
import org.junit.BeforeClass;
package org.opendaylight.controller.cluster.datastore;
-import akka.actor.ActorSystem;
-import akka.testkit.javadsl.TestKit;
import com.typesafe.config.ConfigFactory;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import static org.mockito.Mockito.timeout;
import static org.mockito.Mockito.verify;
-import akka.actor.ActorSystem;
import com.google.common.base.Throwables;
import com.google.common.collect.ImmutableMap;
import com.google.common.util.concurrent.FluentFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;
+import org.apache.pekko.actor.ActorSystem;
import org.junit.Ignore;
import org.junit.Test;
import org.junit.runners.Parameterized.Parameter;
import static org.opendaylight.controller.cluster.datastore.ShardDataTreeMocking.successfulCommit;
import static org.opendaylight.controller.cluster.datastore.ShardDataTreeMocking.successfulPreCommit;
-import akka.actor.ActorRef;
-import akka.actor.PoisonPill;
-import akka.actor.Props;
-import akka.dispatch.Dispatchers;
-import akka.japi.Creator;
-import akka.pattern.Patterns;
-import akka.testkit.TestActorRef;
-import akka.util.Timeout;
import com.google.common.primitives.UnsignedLong;
import com.google.common.util.concurrent.FutureCallback;
import com.google.common.util.concurrent.Uninterruptibles;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.concurrent.atomic.AtomicInteger;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.PoisonPill;
+import org.apache.pekko.actor.Props;
+import org.apache.pekko.dispatch.Dispatchers;
+import org.apache.pekko.japi.Creator;
+import org.apache.pekko.pattern.Patterns;
+import org.apache.pekko.testkit.TestActorRef;
+import org.apache.pekko.util.Timeout;
import org.junit.After;
import org.junit.Assert;
import org.junit.Before;
*/
package org.opendaylight.controller.cluster.datastore;
-import akka.actor.ActorSystem;
-import akka.testkit.javadsl.TestKit;
import com.typesafe.config.ConfigFactory;
import java.util.ArrayList;
import java.util.Collection;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.atomic.AtomicLong;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.After;
import org.opendaylight.controller.cluster.access.concepts.ClientIdentifier;
import org.opendaylight.controller.cluster.access.concepts.FrontendIdentifier;
import static org.mockito.Mockito.verify;
import static org.opendaylight.controller.md.cluster.datastore.model.TestModel.TEST_PATH;
-import akka.actor.ActorRef;
-import akka.actor.DeadLetter;
-import akka.actor.Props;
-import akka.testkit.javadsl.TestKit;
import com.google.common.collect.ImmutableList;
import java.time.Duration;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.DeadLetter;
+import org.apache.pekko.actor.Props;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.Before;
import org.junit.Test;
import org.opendaylight.controller.cluster.datastore.messages.DataTreeChanged;
import static org.mockito.Mockito.doReturn;
import static org.mockito.Mockito.mock;
-import akka.actor.ActorRef;
-import akka.actor.ActorSystem;
-import akka.actor.Props;
-import akka.actor.Terminated;
-import akka.dispatch.ExecutionContexts;
-import akka.dispatch.Futures;
-import akka.testkit.javadsl.TestKit;
-import akka.util.Timeout;
import com.google.common.util.concurrent.MoreExecutors;
import com.google.common.util.concurrent.Uninterruptibles;
import java.time.Duration;
import java.util.concurrent.Executor;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.actor.Props;
+import org.apache.pekko.actor.Terminated;
+import org.apache.pekko.dispatch.ExecutionContexts;
+import org.apache.pekko.dispatch.Futures;
+import org.apache.pekko.testkit.javadsl.TestKit;
+import org.apache.pekko.util.Timeout;
import org.eclipse.jdt.annotation.NonNullByDefault;
import org.junit.Test;
import org.mockito.ArgumentCaptor;
import static org.opendaylight.controller.md.cluster.datastore.model.TestModel.outerNodeEntry;
import static org.opendaylight.controller.md.cluster.datastore.model.TestModel.testNodeWithOuter;
-import akka.actor.ActorRef;
-import akka.actor.ActorSelection;
-import akka.pattern.Patterns;
-import akka.testkit.TestActorRef;
-import akka.testkit.javadsl.TestKit;
-import akka.util.Timeout;
import java.time.Duration;
import java.util.AbstractMap.SimpleEntry;
import java.util.Map.Entry;
import java.util.concurrent.TimeUnit;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSelection;
+import org.apache.pekko.pattern.Patterns;
+import org.apache.pekko.testkit.TestActorRef;
+import org.apache.pekko.testkit.javadsl.TestKit;
+import org.apache.pekko.util.Timeout;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.mockito.Mockito.reset;
import static org.mockito.Mockito.verify;
-import akka.actor.ActorRef;
-import akka.pattern.Patterns;
-import akka.util.Timeout;
import com.google.common.util.concurrent.FluentFuture;
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.Uninterruptibles;
import java.util.concurrent.Executor;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.pattern.Patterns;
+import org.apache.pekko.util.Timeout;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.verifyNoMoreInteractions;
-import akka.actor.ActorSystem;
-import akka.actor.Address;
-import akka.actor.AddressFromURIString;
-import akka.cluster.Cluster;
-import akka.testkit.javadsl.TestKit;
import com.google.common.base.Throwables;
import com.google.common.util.concurrent.FluentFuture;
import com.typesafe.config.ConfigFactory;
import java.util.Collection;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.actor.Address;
+import org.apache.pekko.actor.AddressFromURIString;
+import org.apache.pekko.cluster.Cluster;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Ignore;
@BeforeClass
public static void setUpClass() {
system = ActorSystem.create("cluster-test", ConfigFactory.load().getConfig("Member1"));
- final Address member1Address = AddressFromURIString.parse("akka://cluster-test@127.0.0.1:2558");
+ final Address member1Address = AddressFromURIString.parse("pekko://cluster-test@127.0.0.1:2558");
Cluster.get(system).join(member1Address);
}
import static org.junit.Assert.assertTrue;
import static org.junit.Assert.fail;
-import akka.actor.ActorSystem;
-import akka.actor.Address;
-import akka.actor.AddressFromURIString;
-import akka.cluster.Cluster;
-import akka.testkit.javadsl.TestKit;
import com.google.common.base.Throwables;
import com.google.common.util.concurrent.FluentFuture;
import com.google.common.util.concurrent.Uninterruptibles;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.actor.Address;
+import org.apache.pekko.actor.AddressFromURIString;
+import org.apache.pekko.cluster.Cluster;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
InMemorySnapshotStore.clear();
InMemoryJournal.clear();
system = ActorSystem.create("cluster-test", ConfigFactory.load().getConfig("Member1"));
- Address member1Address = AddressFromURIString.parse("akka://cluster-test@127.0.0.1:2558");
+ Address member1Address = AddressFromURIString.parse("pekko://cluster-test@127.0.0.1:2558");
Cluster.get(system).join(member1Address);
}
import static org.mockito.Mockito.timeout;
import static org.mockito.Mockito.verify;
-import akka.actor.ActorRef;
-import akka.actor.ActorSelection;
-import akka.actor.ActorSystem;
-import akka.actor.Address;
-import akka.actor.AddressFromURIString;
-import akka.cluster.Cluster;
-import akka.cluster.Member;
-import akka.dispatch.Futures;
-import akka.pattern.Patterns;
-import akka.testkit.javadsl.TestKit;
import com.google.common.base.Stopwatch;
import com.google.common.base.Throwables;
import com.google.common.collect.ImmutableMap;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicLong;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSelection;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.actor.Address;
+import org.apache.pekko.actor.AddressFromURIString;
+import org.apache.pekko.cluster.Cluster;
+import org.apache.pekko.cluster.Member;
+import org.apache.pekko.dispatch.Futures;
+import org.apache.pekko.pattern.Patterns;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
private static final String[] CARS = {"cars"};
private static final Address MEMBER_1_ADDRESS = AddressFromURIString.parse(
- "akka://cluster-test@127.0.0.1:2558");
+ "pekko://cluster-test@127.0.0.1:2558");
private static final Address MEMBER_2_ADDRESS = AddressFromURIString.parse(
- "akka://cluster-test@127.0.0.1:2559");
+ "pekko://cluster-test@127.0.0.1:2559");
private static final String MODULE_SHARDS_CARS_ONLY_1_2 = "module-shards-cars-member-1-and-2.conf";
private static final String MODULE_SHARDS_CARS_PEOPLE_1_2 = "module-shards-member1-and-2.conf";
carsFollowerShard.orElseThrow().tell(readyLocal, followerTestKit.getRef());
Object resp = followerTestKit.expectMsgClass(Object.class);
- if (resp instanceof akka.actor.Status.Failure) {
- throw new AssertionError("Unexpected failure response", ((akka.actor.Status.Failure)resp).cause());
+ if (resp instanceof org.apache.pekko.actor.Status.Failure) {
+ throw new AssertionError("Unexpected failure response",
+ ((org.apache.pekko.actor.Status.Failure)resp).cause());
}
assertEquals("Response type", CommitTransactionReply.class, resp.getClass());
carsFollowerShard.orElseThrow().tell(readyLocal, followerTestKit.getRef());
resp = followerTestKit.expectMsgClass(Object.class);
- if (resp instanceof akka.actor.Status.Failure) {
- throw new AssertionError("Unexpected failure response", ((akka.actor.Status.Failure)resp).cause());
+ if (resp instanceof org.apache.pekko.actor.Status.Failure) {
+ throw new AssertionError("Unexpected failure response",
+ ((org.apache.pekko.actor.Status.Failure)resp).cause());
}
assertEquals("Response type", ReadyTransactionReply.class, resp.getClass());
carsFollowerShard.orElseThrow().tell(forwardedReady, followerTestKit.getRef());
Object resp = followerTestKit.expectMsgClass(Object.class);
- if (resp instanceof akka.actor.Status.Failure) {
- throw new AssertionError("Unexpected failure response", ((akka.actor.Status.Failure)resp).cause());
+ if (resp instanceof org.apache.pekko.actor.Status.Failure) {
+ throw new AssertionError("Unexpected failure response",
+ ((org.apache.pekko.actor.Status.Failure)resp).cause());
}
assertEquals("Response type", CommitTransactionReply.class, resp.getClass());
carsFollowerShard.orElseThrow().tell(forwardedReady, followerTestKit.getRef());
resp = followerTestKit.expectMsgClass(Object.class);
- if (resp instanceof akka.actor.Status.Failure) {
- throw new AssertionError("Unexpected failure response", ((akka.actor.Status.Failure)resp).cause());
+ if (resp instanceof org.apache.pekko.actor.Status.Failure) {
+ throw new AssertionError("Unexpected failure response",
+ ((org.apache.pekko.actor.Status.Failure)resp).cause());
}
assertEquals("Response type", ReadyTransactionReply.class, resp.getClass());
import static org.junit.Assert.assertTrue;
import static org.opendaylight.controller.md.cluster.datastore.model.CarsModel.CAR_QNAME;
-import akka.actor.ActorSystem;
-import akka.actor.Address;
-import akka.actor.AddressFromURIString;
-import akka.cluster.Cluster;
-import akka.testkit.javadsl.TestKit;
import com.google.common.base.Stopwatch;
import com.google.common.util.concurrent.Uninterruptibles;
import com.typesafe.config.ConfigFactory;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import org.apache.commons.io.FileUtils;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.actor.Address;
+import org.apache.pekko.actor.AddressFromURIString;
+import org.apache.pekko.cluster.Cluster;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
ConfigFactory.load("segmented.conf").getConfig("Member1"));
cleanSnapshotDir(system);
- Address member1Address = AddressFromURIString.parse("akka://cluster-test@127.0.0.1:2558");
+ Address member1Address = AddressFromURIString.parse("pekko://cluster-test@127.0.0.1:2558");
Cluster.get(system).join(member1Address);
}
private static void cleanSnapshotDir(final ActorSystem system) {
File journalDir = new File(system.settings().config()
- .getString("akka.persistence.journal.segmented-file.root-directory"));
+ .getString("pekko.persistence.journal.segmented-file.root-directory"));
if (!journalDir.exists()) {
return;
import static org.junit.Assert.assertSame;
import static org.mockito.Mockito.mock;
-import akka.actor.ActorRef;
import java.util.List;
+import org.apache.pekko.actor.ActorRef;
import org.junit.Test;
import org.opendaylight.controller.cluster.datastore.messages.DataTreeChanged;
import org.opendaylight.controller.cluster.raft.utils.MessageCollectorActor;
import static org.mockito.Mockito.verifyNoMoreInteractions;
import static org.mockito.Mockito.when;
-import akka.actor.ActorRef;
import java.util.Optional;
+import org.apache.pekko.actor.ActorRef;
import org.junit.Before;
import org.junit.Test;
import org.opendaylight.controller.cluster.access.commands.ModifyTransactionRequestBuilder;
import static org.mockito.Mockito.doReturn;
import static org.mockito.Mockito.mock;
-import akka.actor.ActorRef;
-import akka.actor.ActorSystem;
-import akka.cluster.Cluster;
-import akka.cluster.ClusterEvent.CurrentClusterState;
-import akka.cluster.Member;
-import akka.cluster.MemberStatus;
import com.google.common.base.Stopwatch;
import com.google.common.collect.Sets;
import com.google.common.util.concurrent.ListenableFuture;
import java.util.Set;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.cluster.Cluster;
+import org.apache.pekko.cluster.ClusterEvent.CurrentClusterState;
+import org.apache.pekko.cluster.Member;
+import org.apache.pekko.cluster.MemberStatus;
import org.opendaylight.controller.cluster.databroker.ClientBackedDataStore;
import org.opendaylight.controller.cluster.datastore.DatastoreContext.Builder;
import org.opendaylight.controller.cluster.datastore.config.Configuration;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.fail;
-import akka.actor.ActorRef;
-import akka.actor.ActorSystem;
-import akka.actor.AddressFromURIString;
-import akka.cluster.Cluster;
-import akka.cluster.ClusterEvent.CurrentClusterState;
-import akka.cluster.Member;
-import akka.cluster.MemberStatus;
import com.google.common.base.Stopwatch;
import com.google.common.util.concurrent.Uninterruptibles;
import com.typesafe.config.Config;
import java.util.Optional;
import java.util.Set;
import java.util.concurrent.TimeUnit;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.actor.AddressFromURIString;
+import org.apache.pekko.cluster.Cluster;
+import org.apache.pekko.cluster.ClusterEvent.CurrentClusterState;
+import org.apache.pekko.cluster.Member;
+import org.apache.pekko.cluster.MemberStatus;
import org.opendaylight.controller.cluster.access.concepts.MemberName;
import org.opendaylight.controller.cluster.databroker.ClientBackedDataStore;
import org.opendaylight.controller.cluster.datastore.identifiers.ShardIdentifier;
* @author Thomas Pantelis
*/
public class MemberNode {
- private static final String MEMBER_1_ADDRESS = "akka://cluster-test@127.0.0.1:2558";
+ private static final String MEMBER_1_ADDRESS = "pekko://cluster-test@127.0.0.1:2558";
private IntegrationTestKit kit;
private ClientBackedDataStore configDataStore;
}
ActorSystem system = ActorSystem.create("cluster-test", config);
- String member1Address = useAkkaArtery ? MEMBER_1_ADDRESS : MEMBER_1_ADDRESS.replace("akka", "akka.tcp");
+ String member1Address = useAkkaArtery ? MEMBER_1_ADDRESS : MEMBER_1_ADDRESS.replace("pekko", "pekko.tcp");
Cluster.get(system).join(AddressFromURIString.parse(member1Address));
node.kit = new IntegrationTestKit(system, datastoreContextBuilder);
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertNull;
-import akka.actor.ActorRef;
-import akka.testkit.TestActorRef;
-import akka.testkit.javadsl.TestKit;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.testkit.TestActorRef;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.Before;
import org.junit.Test;
import org.opendaylight.controller.cluster.notifications.LeaderStateChanged;
import static org.mockito.Mockito.timeout;
import static org.mockito.Mockito.verify;
-import akka.actor.ActorSelection;
-import akka.testkit.javadsl.TestKit;
import com.google.common.collect.ImmutableList;
import java.time.Duration;
import java.util.List;
import java.util.Set;
+import org.apache.pekko.actor.ActorSelection;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.Test;
import org.opendaylight.controller.cluster.datastore.config.Configuration;
import org.opendaylight.controller.cluster.datastore.exceptions.NotInitializedException;
import static org.opendaylight.controller.md.cluster.datastore.model.TestModel.outerMapNode;
import static org.opendaylight.controller.md.cluster.datastore.model.TestModel.outerNode;
-import akka.dispatch.Dispatchers;
-import akka.testkit.TestActorRef;
-import akka.testkit.javadsl.TestKit;
import com.google.common.collect.ImmutableSortedSet;
import java.time.Duration;
import java.util.SortedSet;
+import org.apache.pekko.dispatch.Dispatchers;
+import org.apache.pekko.testkit.TestActorRef;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.Test;
import org.opendaylight.controller.cluster.access.concepts.MemberName;
import org.opendaylight.controller.cluster.access.concepts.TransactionIdentifier;
import static org.mockito.Mockito.mock;
import static org.opendaylight.controller.cluster.datastore.DataStoreVersions.CURRENT_VERSION;
-import akka.actor.ActorRef;
-import akka.actor.ActorSelection;
-import akka.actor.Props;
-import akka.actor.Status.Failure;
-import akka.dispatch.Dispatchers;
-import akka.dispatch.OnComplete;
-import akka.japi.Creator;
-import akka.pattern.Patterns;
-import akka.persistence.SaveSnapshotSuccess;
-import akka.testkit.TestActorRef;
-import akka.util.Timeout;
import com.google.common.base.Stopwatch;
import com.google.common.base.Throwables;
import com.google.common.util.concurrent.Uninterruptibles;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicReference;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSelection;
+import org.apache.pekko.actor.Props;
+import org.apache.pekko.actor.Status.Failure;
+import org.apache.pekko.dispatch.Dispatchers;
+import org.apache.pekko.dispatch.OnComplete;
+import org.apache.pekko.japi.Creator;
+import org.apache.pekko.pattern.Patterns;
+import org.apache.pekko.persistence.SaveSnapshotSuccess;
+import org.apache.pekko.testkit.TestActorRef;
+import org.apache.pekko.util.Timeout;
import org.junit.Test;
import org.mockito.InOrder;
import org.opendaylight.controller.cluster.DataPersistenceProvider;
RegisterDataTreeNotificationListenerReply.class);
final String replyPath = reply.getListenerRegistrationPath().toString();
assertTrue("Incorrect reply path: " + replyPath,
- replyPath.matches("akka:\\/\\/test\\/user\\/testRegisterDataTreeChangeListener\\/\\$.*"));
+ replyPath.matches("pekko:\\/\\/test\\/user\\/testRegisterDataTreeChangeListener\\/\\$.*"));
final YangInstanceIdentifier path = TestModel.TEST_PATH;
writeToStore(shard, path, ImmutableNodes.containerNode(TestModel.TEST_QNAME));
.peerAddresses(Collections.<String, String>singletonMap(peerID.toString(), null))
.props().withDispatcher(Dispatchers.DefaultDispatcherId()), "testPeerAddressResolved");
- final String address = "akka://foobar";
+ final String address = "pekko://foobar";
shard.tell(new PeerAddressResolved(peerID.toString(), address), ActorRef.noSender());
shard.tell(GetOnDemandRaftState.INSTANCE, testKit.getRef());
final ReadyTransactionReply readyReply = ReadyTransactionReply
.fromSerializable(testKit.expectMsgClass(duration, ReadyTransactionReply.class));
- String pathSuffix = shard.path().toString().replaceFirst("akka://test", "");
+ String pathSuffix = shard.path().toString().replaceFirst("pekko://test", "");
assertThat(readyReply.getCohortPath(), endsWith(pathSuffix));
// Send the CanCommitTransaction message for the first Tx.
BatchedModifications batched = new BatchedModifications(transactionID, CURRENT_VERSION);
batched.addModification(new MergeModification(TestModel.TEST_PATH, invalidData));
shard.tell(batched, testKit.getRef());
- Failure failure = testKit.expectMsgClass(Duration.ofSeconds(5), akka.actor.Status.Failure.class);
+ Failure failure = testKit.expectMsgClass(Duration.ofSeconds(5), org.apache.pekko.actor.Status.Failure.class);
final Throwable cause = failure.cause();
shard.tell(batched, testKit.getRef());
- failure = testKit.expectMsgClass(Duration.ofSeconds(5), akka.actor.Status.Failure.class);
+ failure = testKit.expectMsgClass(Duration.ofSeconds(5), org.apache.pekko.actor.Status.Failure.class);
assertEquals("Failure cause", cause, failure.cause());
}
// and trigger the 2nd Tx to proceed.
shard.tell(new CommitTransaction(transactionID1, CURRENT_VERSION).toSerializable(), testKit.getRef());
- testKit.expectMsgClass(duration, akka.actor.Status.Failure.class);
+ testKit.expectMsgClass(duration, org.apache.pekko.actor.Status.Failure.class);
// Wait for the 2nd Tx to complete the canCommit phase.
// and trigger the 2nd Tx to proceed.
shard.tell(new CommitTransaction(transactionID1, CURRENT_VERSION).toSerializable(), testKit.getRef());
- testKit.expectMsgClass(duration, akka.actor.Status.Failure.class);
+ testKit.expectMsgClass(duration, org.apache.pekko.actor.Status.Failure.class);
// Wait for the 2nd Tx to complete the canCommit phase.
// Send the CanCommitTransaction message.
shard.tell(new CanCommitTransaction(transactionID1, CURRENT_VERSION).toSerializable(), testKit.getRef());
- testKit.expectMsgClass(duration, akka.actor.Status.Failure.class);
+ testKit.expectMsgClass(duration, org.apache.pekko.actor.Status.Failure.class);
// Send another can commit to ensure the failed one got cleaned
// up.
ImmutableNodes.containerNode(TestModel.TEST_QNAME), true), testKit.getRef());
}
- testKit.expectMsgClass(duration, akka.actor.Status.Failure.class);
+ testKit.expectMsgClass(duration, org.apache.pekko.actor.Status.Failure.class);
// Send another can commit to ensure the failed one got cleaned
// up.
// current Tx.
shard.tell(new CommitTransaction(transactionID1, CURRENT_VERSION).toSerializable(), testKit.getRef());
- testKit.expectMsgClass(duration, akka.actor.Status.Failure.class);
+ testKit.expectMsgClass(duration, org.apache.pekko.actor.Status.Failure.class);
// Commit the 2nd Tx.
//
// shard.tell(prepareReadyTransactionMessage(false, shard.underlyingActor(), cohort3, transactionID3,
// modification3), getRef());
-// expectMsgClass(duration, akka.actor.Status.Failure.class);
+// expectMsgClass(duration, org.apache.pekko.actor.Status.Failure.class);
//
// // canCommit 1st Tx.
//
// // canCommit the 3rd Tx - should exceed queue capacity and fail.
//
// shard.tell(new CanCommitTransaction(transactionID3, CURRENT_VERSION).toSerializable(), getRef());
-// expectMsgClass(duration, akka.actor.Status.Failure.class);
+// expectMsgClass(duration, org.apache.pekko.actor.Status.Failure.class);
// }};
// }
"testCanCommitBeforeReadyFailure");
shard.tell(new CanCommitTransaction(nextTransactionId(), CURRENT_VERSION).toSerializable(), testKit.getRef());
- testKit.expectMsgClass(Duration.ofSeconds(5), akka.actor.Status.Failure.class);
+ testKit.expectMsgClass(Duration.ofSeconds(5), org.apache.pekko.actor.Status.Failure.class);
}
@Test
// Now send CanCommitTransaction - should fail.
shard.tell(new CanCommitTransaction(transactionID1, CURRENT_VERSION).toSerializable(), testKit.getRef());
- final Throwable failure = testKit.expectMsgClass(duration, akka.actor.Status.Failure.class).cause();
+ final Throwable failure = testKit.expectMsgClass(duration, org.apache.pekko.actor.Status.Failure.class).cause();
assertTrue("Failure type", failure instanceof IllegalStateException);
// Ready and CanCommit another and verify success.
.createTestActor(Shard.builder().id(followerShardID)
.datastoreContext(dataStoreContextBuilder.shardElectionTimeoutFactor(1000).build())
.peerAddresses(Collections.singletonMap(leaderShardID.toString(),
- "akka://test/user/" + leaderShardID.toString()))
+ "pekko://test/user/" + leaderShardID.toString()))
.schemaContextProvider(() -> SCHEMA_CONTEXT).props()
.withDispatcher(Dispatchers.DefaultDispatcherId()), followerShardID.toString());
final TestActorRef<Shard> leaderShard = actorFactory
.createTestActor(Shard.builder().id(leaderShardID).datastoreContext(newDatastoreContext())
.peerAddresses(Collections.singletonMap(followerShardID.toString(),
- "akka://test/user/" + followerShardID.toString()))
+ "pekko://test/user/" + followerShardID.toString()))
.schemaContextProvider(() -> SCHEMA_CONTEXT).props()
.withDispatcher(Dispatchers.DefaultDispatcherId()), leaderShardID.toString());
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.fail;
-import akka.actor.ActorRef;
-import akka.actor.ActorSystem;
-import akka.pattern.Patterns;
-import akka.testkit.javadsl.EventFilter;
-import akka.testkit.javadsl.TestKit;
-import akka.util.Timeout;
import com.google.common.util.concurrent.Uninterruptibles;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.pattern.Patterns;
+import org.apache.pekko.testkit.javadsl.EventFilter;
+import org.apache.pekko.testkit.javadsl.TestKit;
+import org.apache.pekko.util.Timeout;
import org.opendaylight.controller.cluster.raft.client.messages.FindLeader;
import org.opendaylight.controller.cluster.raft.client.messages.FindLeaderReply;
import org.slf4j.Logger;
import static org.mockito.Mockito.doReturn;
import static org.mockito.Mockito.mock;
-import akka.actor.ActorRef;
-import akka.actor.Props;
-import akka.testkit.TestActorRef;
import java.util.concurrent.TimeUnit;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.Props;
+import org.apache.pekko.testkit.TestActorRef;
import org.junit.Before;
import org.junit.Test;
import org.opendaylight.controller.cluster.access.concepts.MemberName;
final TestActorRef<ShardTransaction> subject = TestActorRef.create(getSystem(), props,
"testNegativeReadWithReadOnlyTransactionClosed");
- Future<Object> future = akka.pattern.Patterns.ask(subject,
+ Future<Object> future = org.apache.pekko.pattern.Patterns.ask(subject,
new ReadData(YangInstanceIdentifier.of(), DataStoreVersions.CURRENT_VERSION), 3000);
Await.result(future, FiniteDuration.create(3, TimeUnit.SECONDS));
subject.underlyingActor().getDOMStoreTransaction().abortFromTransactionActor();
- future = akka.pattern.Patterns.ask(subject, new ReadData(YangInstanceIdentifier.of(),
+ future = org.apache.pekko.pattern.Patterns.ask(subject, new ReadData(YangInstanceIdentifier.of(),
DataStoreVersions.CURRENT_VERSION), 3000);
Await.result(future, FiniteDuration.create(3, TimeUnit.SECONDS));
}
final TestActorRef<ShardTransaction> subject = TestActorRef.create(getSystem(), props,
"testNegativeReadWithReadWriteTransactionClosed");
- Future<Object> future = akka.pattern.Patterns.ask(subject,
+ Future<Object> future = org.apache.pekko.pattern.Patterns.ask(subject,
new ReadData(YangInstanceIdentifier.of(), DataStoreVersions.CURRENT_VERSION), 3000);
Await.result(future, FiniteDuration.create(3, TimeUnit.SECONDS));
subject.underlyingActor().getDOMStoreTransaction().abortFromTransactionActor();
- future = akka.pattern.Patterns.ask(subject, new ReadData(YangInstanceIdentifier.of(),
+ future = org.apache.pekko.pattern.Patterns.ask(subject, new ReadData(YangInstanceIdentifier.of(),
DataStoreVersions.CURRENT_VERSION), 3000);
Await.result(future, FiniteDuration.create(3, TimeUnit.SECONDS));
}
final TestActorRef<ShardTransaction> subject = TestActorRef.create(getSystem(), props,
"testNegativeExistsWithReadWriteTransactionClosed");
- Future<Object> future = akka.pattern.Patterns.ask(subject,
+ Future<Object> future = org.apache.pekko.pattern.Patterns.ask(subject,
new DataExists(YangInstanceIdentifier.of(), DataStoreVersions.CURRENT_VERSION), 3000);
Await.result(future, FiniteDuration.create(3, TimeUnit.SECONDS));
subject.underlyingActor().getDOMStoreTransaction().abortFromTransactionActor();
- future = akka.pattern.Patterns.ask(subject,
+ future = org.apache.pekko.pattern.Patterns.ask(subject,
new DataExists(YangInstanceIdentifier.of(), DataStoreVersions.CURRENT_VERSION), 3000);
Await.result(future, FiniteDuration.create(3, TimeUnit.SECONDS));
}
import static org.mockito.Mockito.inOrder;
import static org.mockito.Mockito.mock;
-import akka.actor.ActorRef;
-import akka.actor.Props;
-import akka.actor.Status.Failure;
-import akka.actor.Terminated;
-import akka.dispatch.Dispatchers;
-import akka.testkit.TestActorRef;
-import akka.testkit.javadsl.TestKit;
import com.google.common.base.Throwables;
import java.time.Duration;
import java.util.concurrent.TimeUnit;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.Props;
+import org.apache.pekko.actor.Status.Failure;
+import org.apache.pekko.actor.Terminated;
+import org.apache.pekko.dispatch.Dispatchers;
+import org.apache.pekko.testkit.TestActorRef;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.Before;
import org.junit.Test;
import org.mockito.InOrder;
batched.addModification(new WriteModification(path, node));
transaction.tell(batched, testKit.getRef());
- testKit.expectMsgClass(Duration.ofSeconds(5), akka.actor.Status.Failure.class);
+ testKit.expectMsgClass(Duration.ofSeconds(5), org.apache.pekko.actor.Status.Failure.class);
batched = new BatchedModifications(tx1, DataStoreVersions.CURRENT_VERSION);
batched.setReady();
batched.setTotalMessagesSent(2);
transaction.tell(batched, testKit.getRef());
- Failure failure = testKit.expectMsgClass(Duration.ofSeconds(5), akka.actor.Status.Failure.class);
+ Failure failure = testKit.expectMsgClass(Duration.ofSeconds(5), org.apache.pekko.actor.Status.Failure.class);
watcher.expectMsgClass(Duration.ofSeconds(5), Terminated.class);
if (failure != null) {
transaction.tell(batched, testKit.getRef());
- Failure failure = testKit.expectMsgClass(Duration.ofSeconds(5), akka.actor.Status.Failure.class);
+ Failure failure = testKit.expectMsgClass(Duration.ofSeconds(5), org.apache.pekko.actor.Status.Failure.class);
watcher.expectMsgClass(Duration.ofSeconds(5), Terminated.class);
if (failure != null) {
import static com.google.common.base.Preconditions.checkState;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorSelection;
-import akka.dispatch.OnComplete;
import com.google.common.util.concurrent.FutureCallback;
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.ListenableFuture;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;
+import org.apache.pekko.actor.ActorSelection;
+import org.apache.pekko.dispatch.OnComplete;
import org.opendaylight.controller.cluster.access.concepts.TransactionIdentifier;
import org.opendaylight.controller.cluster.datastore.messages.AbortTransaction;
import org.opendaylight.controller.cluster.datastore.messages.AbortTransactionReply;
actorUtils.getTransactionCommitOperationTimeout()));
}
- return akka.dispatch.Futures.sequence(futureList, actorUtils.getClientDispatcher());
+ return org.apache.pekko.dispatch.Futures.sequence(futureList, actorUtils.getClientDispatcher());
}
@Override
import static org.mockito.Mockito.lenient;
import static org.opendaylight.controller.cluster.datastore.DataStoreVersions.CURRENT_VERSION;
-import akka.actor.ActorSelection;
-import akka.actor.Props;
-import akka.actor.UntypedAbstractActor;
-import akka.dispatch.Dispatchers;
-import akka.dispatch.Futures;
-import akka.testkit.TestActorRef;
import com.codahale.metrics.Snapshot;
import com.codahale.metrics.Timer;
import com.google.common.util.concurrent.ListenableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
+import org.apache.pekko.actor.ActorSelection;
+import org.apache.pekko.actor.Props;
+import org.apache.pekko.actor.UntypedAbstractActor;
+import org.apache.pekko.dispatch.Dispatchers;
+import org.apache.pekko.dispatch.Futures;
+import org.apache.pekko.testkit.TestActorRef;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
assertEquals(name + " transactionId", builder.transactionId, actualMessage.getTransactionId());
if (reply instanceof Throwable) {
- getSender().tell(new akka.actor.Status.Failure((Throwable)reply), self());
+ getSender().tell(new org.apache.pekko.actor.Status.Failure((Throwable)reply), self());
} else {
getSender().tell(reply, self());
}
import static org.mockito.Mockito.timeout;
import static org.mockito.Mockito.verify;
-import akka.actor.ActorRef;
-import akka.testkit.javadsl.TestKit;
import java.time.Duration;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;
-import akka.actor.ActorRef;
-import akka.testkit.javadsl.TestKit;
import com.google.common.io.ByteSource;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.time.Duration;
import java.util.Optional;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.Test;
import org.opendaylight.controller.cluster.datastore.AbstractActorTest;
import org.opendaylight.controller.cluster.datastore.persisted.MetadataShardDataTreeSnapshot;
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertTrue;
-import akka.actor.ExtendedActorSystem;
-import akka.testkit.javadsl.TestKit;
import com.google.common.collect.ImmutableSortedSet;
import java.io.NotSerializableException;
import java.util.List;
import java.util.Optional;
import java.util.SortedSet;
+import org.apache.pekko.actor.ExtendedActorSystem;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.Test;
import org.opendaylight.controller.cluster.access.concepts.TransactionIdentifier;
import org.opendaylight.controller.cluster.datastore.AbstractTest;
import static org.junit.Assert.assertEquals;
-import akka.actor.ActorRef;
-import akka.actor.Status.Failure;
-import akka.actor.Terminated;
-import akka.testkit.javadsl.TestKit;
import java.time.Duration;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.Status.Failure;
+import org.apache.pekko.actor.Terminated;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.Test;
import org.opendaylight.controller.cluster.access.concepts.MemberName;
import org.opendaylight.controller.cluster.datastore.AbstractActorTest;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.verifyNoMoreInteractions;
-import akka.actor.ActorRef;
-import akka.actor.ActorSystem;
-import akka.actor.AddressFromURIString;
-import akka.actor.PoisonPill;
-import akka.actor.Props;
-import akka.actor.Status;
-import akka.actor.Status.Failure;
-import akka.actor.Status.Success;
-import akka.cluster.Cluster;
-import akka.cluster.ClusterEvent;
-import akka.cluster.Member;
-import akka.dispatch.Dispatchers;
-import akka.dispatch.OnComplete;
-import akka.japi.Creator;
-import akka.pattern.Patterns;
-import akka.persistence.RecoveryCompleted;
-import akka.serialization.Serialization;
-import akka.testkit.TestActorRef;
-import akka.testkit.javadsl.TestKit;
-import akka.util.Timeout;
import com.google.common.base.Stopwatch;
import com.google.common.collect.ImmutableMap;
import com.google.common.collect.Lists;
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.stream.Collectors;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.actor.AddressFromURIString;
+import org.apache.pekko.actor.PoisonPill;
+import org.apache.pekko.actor.Props;
+import org.apache.pekko.actor.Status;
+import org.apache.pekko.actor.Status.Failure;
+import org.apache.pekko.actor.Status.Success;
+import org.apache.pekko.cluster.Cluster;
+import org.apache.pekko.cluster.ClusterEvent;
+import org.apache.pekko.cluster.Member;
+import org.apache.pekko.dispatch.Dispatchers;
+import org.apache.pekko.dispatch.OnComplete;
+import org.apache.pekko.japi.Creator;
+import org.apache.pekko.pattern.Patterns;
+import org.apache.pekko.persistence.RecoveryCompleted;
+import org.apache.pekko.serialization.Serialization;
+import org.apache.pekko.testkit.TestActorRef;
+import org.apache.pekko.testkit.javadsl.TestKit;
+import org.apache.pekko.util.Timeout;
import org.junit.After;
import org.junit.AfterClass;
import org.junit.Before;
// Create an ActorSystem ShardManager actor for member-1.
final ActorSystem system1 = newActorSystem("Member1");
- Cluster.get(system1).join(AddressFromURIString.parse("akka://cluster-test@127.0.0.1:2558"));
+ Cluster.get(system1).join(AddressFromURIString.parse("pekko://cluster-test@127.0.0.1:2558"));
final TestActorRef<TestShardManager> shardManager1 = TestActorRef.create(system1,
newTestShardMgrBuilderWithMockShardActor().cluster(
final ActorSystem system2 = newActorSystem("Member2");
- Cluster.get(system2).join(AddressFromURIString.parse("akka://cluster-test@127.0.0.1:2558"));
+ Cluster.get(system2).join(AddressFromURIString.parse("pekko://cluster-test@127.0.0.1:2558"));
final ActorRef mockShardActor2 = newMockShardActor(system2, "astronauts", "member-2");
// This part times out quite a bit on jenkins for some reason
-// Cluster.get(system2).down(AddressFromURIString.parse("akka://cluster-test@127.0.0.1:2558"));
+// Cluster.get(system2).down(AddressFromURIString.parse("pekko://cluster-test@127.0.0.1:2558"));
//
// shardManager1.underlyingActor().waitForMemberRemoved();
//
// Create an ActorSystem ShardManager actor for member-1.
final ActorSystem system1 = newActorSystem("Member1");
- Cluster.get(system1).join(AddressFromURIString.parse("akka://cluster-test@127.0.0.1:2558"));
+ Cluster.get(system1).join(AddressFromURIString.parse("pekko://cluster-test@127.0.0.1:2558"));
final ActorRef mockShardActor1 = newMockShardActor(system1, Shard.DEFAULT_NAME, "member-1");
final ActorSystem system2 = newActorSystem("Member2");
- Cluster.get(system2).join(AddressFromURIString.parse("akka://cluster-test@127.0.0.1:2558"));
+ Cluster.get(system2).join(AddressFromURIString.parse("pekko://cluster-test@127.0.0.1:2558"));
final ActorRef mockShardActor2 = newMockShardActor(system2, Shard.DEFAULT_NAME, "member-2");
String path = found.getPrimaryPath();
assertTrue("Unexpected primary path " + path, path.contains("member-2-shard-default-config"));
- shardManager1.tell(MockClusterWrapper.createUnreachableMember("member-2", "akka://cluster-test@127.0.0.1:2558"),
- kit.getRef());
+ shardManager1.tell(MockClusterWrapper.createUnreachableMember("member-2",
+ "pekko://cluster-test@127.0.0.1:2558"), kit.getRef());
shardManager1.underlyingActor().waitForUnreachableMember();
MessageCollectorActor.clearMessages(mockShardActor1);
- shardManager1.tell(MockClusterWrapper.createMemberRemoved("member-2", "akka://cluster-test@127.0.0.1:2558"),
+ shardManager1.tell(MockClusterWrapper.createMemberRemoved("member-2", "pekko://cluster-test@127.0.0.1:2558"),
kit.getRef());
shardManager1.tell(new FindPrimary("default", true), kit.getRef());
kit.expectMsgClass(Duration.ofSeconds(5), NoShardLeaderException.class);
- shardManager1.tell(MockClusterWrapper.createReachableMember("member-2", "akka://cluster-test@127.0.0.1:2558"),
+ shardManager1.tell(MockClusterWrapper.createReachableMember("member-2", "pekko://cluster-test@127.0.0.1:2558"),
kit.getRef());
shardManager1.underlyingActor().waitForReachableMember();
String path1 = found1.getPrimaryPath();
assertTrue("Unexpected primary path " + path1, path1.contains("member-2-shard-default-config"));
- shardManager1.tell(MockClusterWrapper.createMemberUp("member-2", "akka://cluster-test@127.0.0.1:2558"),
+ shardManager1.tell(MockClusterWrapper.createMemberUp("member-2", "pekko://cluster-test@127.0.0.1:2558"),
kit.getRef());
// Test FindPrimary wait succeeds after reachable member event.
shardManager1.tell(MockClusterWrapper.createUnreachableMember("member-2",
- "akka://cluster-test@127.0.0.1:2558"), kit.getRef());
+ "pekko://cluster-test@127.0.0.1:2558"), kit.getRef());
shardManager1.underlyingActor().waitForUnreachableMember();
shardManager1.tell(new FindPrimary("default", true), kit.getRef());
shardManager1.tell(
- MockClusterWrapper.createReachableMember("member-2", "akka://cluster-test@127.0.0.1:2558"), kit.getRef());
+ MockClusterWrapper.createReachableMember("member-2", "pekko://cluster-test@127.0.0.1:2558"), kit.getRef());
RemotePrimaryShardFound found2 = kit.expectMsgClass(Duration.ofSeconds(5), RemotePrimaryShardFound.class);
String path2 = found2.getPrimaryPath();
// Create an ActorSystem ShardManager actor for member-1.
final ActorSystem system1 = newActorSystem("Member1");
- Cluster.get(system1).join(AddressFromURIString.parse("akka://cluster-test@127.0.0.1:2558"));
+ Cluster.get(system1).join(AddressFromURIString.parse("pekko://cluster-test@127.0.0.1:2558"));
final ActorRef mockShardActor1 = newMockShardActor(system1, Shard.DEFAULT_NAME, "member-1");
final ActorSystem system2 = newActorSystem("Member2");
- Cluster.get(system2).join(AddressFromURIString.parse("akka://cluster-test@127.0.0.1:2558"));
+ Cluster.get(system2).join(AddressFromURIString.parse("pekko://cluster-test@127.0.0.1:2558"));
final ActorRef mockShardActor2 = newMockShardActor(system2, Shard.DEFAULT_NAME, "member-2");
system1.actorSelection(mockShardActor1.path()), DataStoreVersions.CURRENT_VERSION));
shardManager1.tell(MockClusterWrapper.createUnreachableMember("member-2",
- "akka://cluster-test@127.0.0.1:2558"), kit.getRef());
+ "pekko://cluster-test@127.0.0.1:2558"), kit.getRef());
shardManager1.underlyingActor().waitForUnreachableMember();
final ActorSystem system256 = newActorSystem("Member256");
// 2562 is the tcp port of Member256 in src/test/resources/application.conf.
- Cluster.get(system256).join(AddressFromURIString.parse("akka://cluster-test@127.0.0.1:2562"));
+ Cluster.get(system256).join(AddressFromURIString.parse("pekko://cluster-test@127.0.0.1:2562"));
final ActorRef mockShardActor256 = newMockShardActor(system256, Shard.DEFAULT_NAME, "member-256");
final ActorSystem system2 = newActorSystem("Member2");
// Join member-2 into the cluster of member-256.
- Cluster.get(system2).join(AddressFromURIString.parse("akka://cluster-test@127.0.0.1:2562"));
+ Cluster.get(system2).join(AddressFromURIString.parse("pekko://cluster-test@127.0.0.1:2562"));
final ActorRef mockShardActor2 = newMockShardActor(system2, Shard.DEFAULT_NAME, "member-2");
// Simulate member-2 becoming unreachable.
shardManager256.tell(MockClusterWrapper.createUnreachableMember("member-2",
- "akka://cluster-test@127.0.0.1:2558"), kit256.getRef());
+ "pekko://cluster-test@127.0.0.1:2558"), kit256.getRef());
shardManager256.underlyingActor().waitForUnreachableMember();
// Make sure leader shard on member-256 is still leader and still in the cache.
// Create an ActorSystem ShardManager actor for member-1.
final ActorSystem system1 = newActorSystem("Member1");
- Cluster.get(system1).join(AddressFromURIString.parse("akka://cluster-test@127.0.0.1:2558"));
+ Cluster.get(system1).join(AddressFromURIString.parse("pekko://cluster-test@127.0.0.1:2558"));
ActorRef mockDefaultShardActor = newMockShardActor(system1, Shard.DEFAULT_NAME, "member-1");
final TestActorRef<TestShardManager> newReplicaShardManager = TestActorRef.create(system1,
newTestShardMgrBuilder(mockConfig).shardActor(mockDefaultShardActor)
// Create an ActorSystem ShardManager actor for member-2.
final ActorSystem system2 = newActorSystem("Member2");
- Cluster.get(system2).join(AddressFromURIString.parse("akka://cluster-test@127.0.0.1:2558"));
+ Cluster.get(system2).join(AddressFromURIString.parse("pekko://cluster-test@127.0.0.1:2558"));
String memberId2 = "member-2-shard-astronauts-" + shardMrgIDSuffix;
String name = ShardIdentifier.create("astronauts", MEMBER_2, "config").toString();
newReplicaShardManager.tell(new UpdateSchemaContext(TEST_SCHEMA_CONTEXT), kit.getRef());
MockClusterWrapper.sendMemberUp(newReplicaShardManager, "member-2",
- AddressFromURIString.parse("akka://non-existent@127.0.0.1:5").toString());
+ AddressFromURIString.parse("pekko://non-existent@127.0.0.1:5").toString());
newReplicaShardManager.tell(new AddShardReplica("astronauts"), kit.getRef());
Status.Failure resp = kit.expectMsgClass(Duration.ofSeconds(5), Status.Failure.class);
// Create an ActorSystem ShardManager actor for member-1.
final ActorSystem system1 = newActorSystem("Member1");
- Cluster.get(system1).join(AddressFromURIString.parse("akka://cluster-test@127.0.0.1:2558"));
+ Cluster.get(system1).join(AddressFromURIString.parse("pekko://cluster-test@127.0.0.1:2558"));
ActorRef mockDefaultShardActor = newMockShardActor(system1, Shard.DEFAULT_NAME, "member-1");
final TestActorRef<TestShardManager> newReplicaShardManager = TestActorRef.create(system1,
// Create an ActorSystem ShardManager actor for member-2.
final ActorSystem system2 = newActorSystem("Member2");
- Cluster.get(system2).join(AddressFromURIString.parse("akka://cluster-test@127.0.0.1:2558"));
+ Cluster.get(system2).join(AddressFromURIString.parse("pekko://cluster-test@127.0.0.1:2558"));
String name = ShardIdentifier.create("default", MEMBER_2, shardMrgIDSuffix).toString();
String memberId2 = "member-2-shard-default-" + shardMrgIDSuffix;
shardManagerID);
// Because mockShardLeaderActor is created at the top level of the actor system, it has an address like this:
- // akka://cluster-test@127.0.0.1:2559/user/member-2-shard-default-config1
+ // pekko://cluster-test@127.0.0.1:2559/user/member-2-shard-default-config1
// However, when a shard manager has a local shard which is a follower and a leader that is remote, it will
// try to compute an address for the remote shard leader using the ShardPeerAddressResolver. That address will
// look like this:
- // akka://cluster-test@127.0.0.1:2559/user/shardmanager-config1/member-2-shard-default-config1
+ // pekko://cluster-test@127.0.0.1:2559/user/shardmanager-config1/member-2-shard-default-config1
// In this specific case if we did a FindPrimary for shard default from member-1 we would come up
// with the address of an actor which does not exist, therefore any message sent to that actor would go to
// dead letters.
import static org.junit.Assert.assertEquals;
-import akka.actor.Address;
import com.google.common.collect.Sets;
import java.util.Collection;
+import org.apache.pekko.actor.Address;
import org.junit.Test;
import org.opendaylight.controller.cluster.access.concepts.MemberName;
import org.opendaylight.controller.cluster.datastore.identifiers.ShardIdentifier;
String peerId = ShardIdentifier.create("default", MEMBER_2, type).toString();
- String address = "akka://opendaylight-cluster-data@127.0.0.1:2550/user/shardmanager-" + type
+ String address = "pekko://opendaylight-cluster-data@127.0.0.1:2550/user/shardmanager-" + type
+ "/" + MEMBER_2.getName() + "-shard-default-" + type;
resolver.setResolved(peerId, address);
*/
package org.opendaylight.controller.cluster.datastore.shardmanager;
-import akka.actor.ActorRef;
-import akka.actor.Props;
import java.util.Map;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.Props;
import org.opendaylight.controller.cluster.datastore.DatastoreContext;
import org.opendaylight.controller.cluster.datastore.TestShard;
import org.opendaylight.controller.cluster.datastore.identifiers.ShardIdentifier;
import static org.mockito.Mockito.doReturn;
import static org.mockito.Mockito.mock;
-import akka.actor.ActorRef;
-import akka.actor.ActorSelection;
-import akka.actor.ActorSystem;
-import akka.actor.Address;
-import akka.actor.Props;
-import akka.actor.UntypedAbstractActor;
-import akka.dispatch.Futures;
-import akka.japi.Creator;
-import akka.testkit.TestActorRef;
-import akka.testkit.javadsl.TestKit;
-import akka.util.Timeout;
import com.google.common.collect.Sets;
import com.typesafe.config.ConfigFactory;
import java.time.Duration;
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.TimeUnit;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSelection;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.actor.Address;
+import org.apache.pekko.actor.Props;
+import org.apache.pekko.actor.UntypedAbstractActor;
+import org.apache.pekko.dispatch.Futures;
+import org.apache.pekko.japi.Creator;
+import org.apache.pekko.testkit.TestActorRef;
+import org.apache.pekko.testkit.javadsl.TestKit;
+import org.apache.pekko.util.Timeout;
import org.junit.Assert;
import org.junit.Test;
import org.mockito.Mockito;
// even if the path is in local format, match the primary path (first 3 elements) and return true
- clusterWrapper.setSelfAddress(new Address("akka", "test"));
+ clusterWrapper.setSelfAddress(new Address("pekko", "test"));
actorUtils = new ActorUtils(getSystem(), null, clusterWrapper, mock(Configuration.class));
- assertTrue(actorUtils.isPathLocal("akka://test/user/$a"));
+ assertTrue(actorUtils.isPathLocal("pekko://test/user/$a"));
- clusterWrapper.setSelfAddress(new Address("akka", "test"));
+ clusterWrapper.setSelfAddress(new Address("pekko", "test"));
actorUtils = new ActorUtils(getSystem(), null, clusterWrapper, mock(Configuration.class));
- assertTrue(actorUtils.isPathLocal("akka://test/user/$a"));
+ assertTrue(actorUtils.isPathLocal("pekko://test/user/$a"));
- clusterWrapper.setSelfAddress(new Address("akka", "test"));
+ clusterWrapper.setSelfAddress(new Address("pekko", "test"));
actorUtils = new ActorUtils(getSystem(), null, clusterWrapper, mock(Configuration.class));
- assertTrue(actorUtils.isPathLocal("akka://test/user/token2/token3/$a"));
+ assertTrue(actorUtils.isPathLocal("pekko://test/user/token2/token3/$a"));
// self address of remote format, but Tx path local format.
- clusterWrapper.setSelfAddress(new Address("akka", "system", "127.0.0.1", 2550));
+ clusterWrapper.setSelfAddress(new Address("pekko", "system", "127.0.0.1", 2550));
actorUtils = new ActorUtils(getSystem(), null, clusterWrapper, mock(Configuration.class));
- assertTrue(actorUtils.isPathLocal("akka://system/user/shardmanager/shard/transaction"));
+ assertTrue(actorUtils.isPathLocal("pekko://system/user/shardmanager/shard/transaction"));
// self address of local format, but Tx path remote format.
- clusterWrapper.setSelfAddress(new Address("akka", "system"));
+ clusterWrapper.setSelfAddress(new Address("pekko", "system"));
actorUtils = new ActorUtils(getSystem(), null, clusterWrapper, mock(Configuration.class));
- assertFalse(actorUtils.isPathLocal("akka://system@127.0.0.1:2550/user/shardmanager/shard/transaction"));
+ assertFalse(actorUtils.isPathLocal("pekko://system@127.0.0.1:2550/user/shardmanager/shard/transaction"));
//local path but not same
- clusterWrapper.setSelfAddress(new Address("akka", "test"));
+ clusterWrapper.setSelfAddress(new Address("pekko", "test"));
actorUtils = new ActorUtils(getSystem(), null, clusterWrapper, mock(Configuration.class));
- assertTrue(actorUtils.isPathLocal("akka://test1/user/$a"));
+ assertTrue(actorUtils.isPathLocal("pekko://test1/user/$a"));
//ip and port same
- clusterWrapper.setSelfAddress(new Address("akka", "system", "127.0.0.1", 2550));
+ clusterWrapper.setSelfAddress(new Address("pekko", "system", "127.0.0.1", 2550));
actorUtils = new ActorUtils(getSystem(), null, clusterWrapper, mock(Configuration.class));
- assertTrue(actorUtils.isPathLocal("akka://system@127.0.0.1:2550/"));
+ assertTrue(actorUtils.isPathLocal("pekko://system@127.0.0.1:2550/"));
// forward-slash missing in address
- clusterWrapper.setSelfAddress(new Address("akka", "system", "127.0.0.1", 2550));
+ clusterWrapper.setSelfAddress(new Address("pekko", "system", "127.0.0.1", 2550));
actorUtils = new ActorUtils(getSystem(), null, clusterWrapper, mock(Configuration.class));
- assertFalse(actorUtils.isPathLocal("akka://system@127.0.0.1:2550"));
+ assertFalse(actorUtils.isPathLocal("pekko://system@127.0.0.1:2550"));
//ips differ
- clusterWrapper.setSelfAddress(new Address("akka", "system", "127.0.0.1", 2550));
+ clusterWrapper.setSelfAddress(new Address("pekko", "system", "127.0.0.1", 2550));
actorUtils = new ActorUtils(getSystem(), null, clusterWrapper, mock(Configuration.class));
- assertFalse(actorUtils.isPathLocal("akka://system@127.1.0.1:2550/"));
+ assertFalse(actorUtils.isPathLocal("pekko://system@127.1.0.1:2550/"));
//ports differ
- clusterWrapper.setSelfAddress(new Address("akka", "system", "127.0.0.1", 2550));
+ clusterWrapper.setSelfAddress(new Address("pekko", "system", "127.0.0.1", 2550));
actorUtils = new ActorUtils(getSystem(), null, clusterWrapper, mock(Configuration.class));
- assertFalse(actorUtils.isPathLocal("akka://system@127.0.0.1:2551/"));
+ assertFalse(actorUtils.isPathLocal("pekko://system@127.0.0.1:2551/"));
}
@Test
.logicalStoreType(LogicalDatastoreType.CONFIGURATION)
.shardLeaderElectionTimeout(100, TimeUnit.MILLISECONDS).build();
- final String expPrimaryPath = "akka://test-system/find-primary-shard";
+ final String expPrimaryPath = "pekko://test-system/find-primary-shard";
final short expPrimaryVersion = DataStoreVersions.CURRENT_VERSION;
ActorUtils actorUtils = new ActorUtils(getSystem(), shardManager, mock(ClusterWrapper.class),
mock(Configuration.class), dataStoreContext, new PrimaryShardInfoFutureCache()) {
.shardLeaderElectionTimeout(100, TimeUnit.MILLISECONDS).build();
final DataTree mockDataTree = Mockito.mock(DataTree.class);
- final String expPrimaryPath = "akka://test-system/find-primary-shard";
+ final String expPrimaryPath = "pekko://test-system/find-primary-shard";
ActorUtils actorUtils = new ActorUtils(getSystem(), shardManager, mock(ClusterWrapper.class),
mock(Configuration.class), dataStoreContext, new PrimaryShardInfoFutureCache()) {
@Override
import static org.mockito.Mockito.doReturn;
import static org.mockito.Mockito.mock;
-import akka.dispatch.MessageDispatcher;
+import org.apache.pekko.dispatch.MessageDispatcher;
import org.junit.Test;
import org.opendaylight.controller.cluster.common.actor.Dispatchers;
@Test
public void testGetDefaultDispatcherPath() {
- akka.dispatch.Dispatchers mockDispatchers = mock(akka.dispatch.Dispatchers.class);
+ org.apache.pekko.dispatch.Dispatchers mockDispatchers = mock(org.apache.pekko.dispatch.Dispatchers.class);
doReturn(false).when(mockDispatchers).hasDispatcher(anyString());
Dispatchers dispatchers = new Dispatchers(mockDispatchers);
@Test
public void testGetDefaultDispatcher() {
- akka.dispatch.Dispatchers mockDispatchers = mock(akka.dispatch.Dispatchers.class);
+ org.apache.pekko.dispatch.Dispatchers mockDispatchers = mock(org.apache.pekko.dispatch.Dispatchers.class);
MessageDispatcher mockGlobalDispatcher = mock(MessageDispatcher.class);
doReturn(false).when(mockDispatchers).hasDispatcher(anyString());
doReturn(mockGlobalDispatcher).when(mockDispatchers).defaultGlobalDispatcher();
@Test
public void testGetDispatcherPath() {
- akka.dispatch.Dispatchers mockDispatchers = mock(akka.dispatch.Dispatchers.class);
+ org.apache.pekko.dispatch.Dispatchers mockDispatchers = mock(org.apache.pekko.dispatch.Dispatchers.class);
doReturn(true).when(mockDispatchers).hasDispatcher(anyString());
Dispatchers dispatchers = new Dispatchers(mockDispatchers);
@Test
public void testGetDispatcher() {
- akka.dispatch.Dispatchers mockDispatchers = mock(akka.dispatch.Dispatchers.class);
+ org.apache.pekko.dispatch.Dispatchers mockDispatchers = mock(org.apache.pekko.dispatch.Dispatchers.class);
MessageDispatcher mockDispatcher = mock(MessageDispatcher.class);
doReturn(true).when(mockDispatchers).hasDispatcher(anyString());
doReturn(mockDispatcher).when(mockDispatchers).lookup(anyString());
*/
package org.opendaylight.controller.cluster.datastore.utils;
-import akka.actor.ActorRef;
-import akka.actor.UntypedAbstractActor;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.UntypedAbstractActor;
public final class ForwardingActor extends UntypedAbstractActor {
private final ActorRef target;
*/
package org.opendaylight.controller.cluster.datastore.utils;
-import akka.actor.ActorRef;
-import akka.actor.Address;
-import akka.actor.AddressFromURIString;
-import akka.cluster.ClusterEvent.MemberRemoved;
-import akka.cluster.ClusterEvent.MemberUp;
-import akka.cluster.ClusterEvent.ReachableMember;
-import akka.cluster.ClusterEvent.UnreachableMember;
-import akka.cluster.Member;
-import akka.cluster.MemberStatus;
-import akka.cluster.UniqueAddress;
-import akka.util.Version;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.Address;
+import org.apache.pekko.actor.AddressFromURIString;
+import org.apache.pekko.cluster.ClusterEvent.MemberRemoved;
+import org.apache.pekko.cluster.ClusterEvent.MemberUp;
+import org.apache.pekko.cluster.ClusterEvent.ReachableMember;
+import org.apache.pekko.cluster.ClusterEvent.UnreachableMember;
+import org.apache.pekko.cluster.Member;
+import org.apache.pekko.cluster.MemberStatus;
+import org.apache.pekko.cluster.UniqueAddress;
+import org.apache.pekko.util.Version;
import org.opendaylight.controller.cluster.access.concepts.MemberName;
import org.opendaylight.controller.cluster.datastore.ClusterWrapper;
import scala.collection.immutable.Set.Set1;
import static org.junit.Assert.assertNotNull;
import static org.mockito.Mockito.mock;
-import akka.actor.ActorSelection;
+import org.apache.pekko.actor.ActorSelection;
import org.junit.Test;
import org.opendaylight.controller.cluster.datastore.DataStoreVersions;
import org.opendaylight.controller.cluster.datastore.messages.PrimaryShardInfo;
-akka {
+pekko {
persistence.snapshot-store.plugin = "in-memory-snapshot-store"
persistence.journal.plugin = "in-memory-journal"
- loggers = ["akka.testkit.TestEventListener", "akka.event.slf4j.Slf4jLogger"]
+ loggers = ["org.apache.pekko.testkit.TestEventListener", "org.apache.pekko.event.slf4j.Slf4jLogger"]
actor {
serializers {
- java = "akka.serialization.JavaSerializer"
- proto = "akka.remote.serialization.ProtobufSerializer"
+ java = "org.apache.pekko.serialization.JavaSerializer"
+ proto = "org.apache.pekko.remote.serialization.ProtobufSerializer"
}
serialization-bindings {
# Class name of the plugin.
class = "org.opendaylight.controller.cluster.datastore.utils.InMemorySnapshotStore"
# Dispatcher for the plugin actor.
- plugin-dispatcher = "akka.persistence.dispatchers.default-plugin-dispatcher"
+ plugin-dispatcher = "pekko.persistence.dispatchers.default-plugin-dispatcher"
}
bounded-mailbox {
-akka {
+pekko {
persistence.snapshot-store.plugin = "in-memory-snapshot-store"
persistence.journal.plugin = "in-memory-journal"
coordinated-shutdown.run-by-actor-system-terminate = off
class = "org.opendaylight.controller.cluster.raft.utils.InMemoryJournal"
}
- loggers = ["akka.testkit.TestEventListener", "akka.event.slf4j.Slf4jLogger"]
+ loggers = ["org.apache.pekko.testkit.TestEventListener", "org.apache.pekko.event.slf4j.Slf4jLogger"]
actor {
warn-about-java-serializer-usage = false
# Class name of the plugin.
class = "org.opendaylight.controller.cluster.raft.utils.InMemorySnapshotStore"
# Dispatcher for the plugin actor.
- plugin-dispatcher = "akka.persistence.dispatchers.default-plugin-dispatcher"
+ plugin-dispatcher = "pekko.persistence.dispatchers.default-plugin-dispatcher"
}
bounded-mailbox {
in-memory-snapshot-store {
class = "org.opendaylight.controller.cluster.raft.utils.InMemorySnapshotStore"
- plugin-dispatcher = "akka.persistence.dispatchers.default-plugin-dispatcher"
+ plugin-dispatcher = "pekko.persistence.dispatchers.default-plugin-dispatcher"
}
shard-dispatcher {
mailbox-type = "org.opendaylight.controller.cluster.common.actor.UnboundedDequeBasedControlAwareMailbox"
}
- akka {
+ pekko {
persistence.snapshot-store.plugin = "in-memory-snapshot-store"
persistence.journal.plugin = "in-memory-journal"
coordinated-shutdown.run-by-actor-system-terminate = off
loglevel = "INFO"
actor {
- provider = "akka.cluster.ClusterActorRefProvider"
+ provider = "org.apache.pekko.cluster.ClusterActorRefProvider"
serializers {
readylocal = "org.opendaylight.controller.cluster.datastore.messages.ReadyLocalTransactionSerializer"
in-memory-snapshot-store {
class = "org.opendaylight.controller.cluster.raft.utils.InMemorySnapshotStore"
- plugin-dispatcher = "akka.persistence.dispatchers.default-plugin-dispatcher"
+ plugin-dispatcher = "pekko.persistence.dispatchers.default-plugin-dispatcher"
}
shard-dispatcher {
mailbox-type = "org.opendaylight.controller.cluster.common.actor.UnboundedDequeBasedControlAwareMailbox"
}
- akka {
+ pekko {
persistence.snapshot-store.plugin = "in-memory-snapshot-store"
persistence.journal.plugin = "in-memory-journal"
coordinated-shutdown.run-by-actor-system-terminate = off
loglevel = "INFO"
actor {
- provider = "akka.cluster.ClusterActorRefProvider"
+ provider = "org.apache.pekko.cluster.ClusterActorRefProvider"
serializers {
readylocal = "org.opendaylight.controller.cluster.datastore.messages.ReadyLocalTransactionSerializer"
in-memory-snapshot-store {
class = "org.opendaylight.controller.cluster.raft.utils.InMemorySnapshotStore"
- plugin-dispatcher = "akka.persistence.dispatchers.default-plugin-dispatcher"
+ plugin-dispatcher = "pekko.persistence.dispatchers.default-plugin-dispatcher"
}
shard-dispatcher {
mailbox-type = "org.opendaylight.controller.cluster.common.actor.UnboundedDequeBasedControlAwareMailbox"
}
- akka {
+ pekko {
persistence.snapshot-store.plugin = "in-memory-snapshot-store"
persistence.journal.plugin = "in-memory-journal"
coordinated-shutdown.run-by-actor-system-terminate = off
}
actor {
- provider = "akka.cluster.ClusterActorRefProvider"
+ provider = "org.apache.pekko.cluster.ClusterActorRefProvider"
serializers {
readylocal = "org.opendaylight.controller.cluster.datastore.messages.ReadyLocalTransactionSerializer"
in-memory-snapshot-store {
class = "org.opendaylight.controller.cluster.raft.utils.InMemorySnapshotStore"
- plugin-dispatcher = "akka.persistence.dispatchers.default-plugin-dispatcher"
+ plugin-dispatcher = "pekko.persistence.dispatchers.default-plugin-dispatcher"
}
shard-dispatcher {
mailbox-type = "org.opendaylight.controller.cluster.common.actor.UnboundedDequeBasedControlAwareMailbox"
}
- akka {
+ pekko {
persistence.snapshot-store.plugin = "in-memory-snapshot-store"
persistence.journal.plugin = "in-memory-journal"
coordinated-shutdown.run-by-actor-system-terminate = off
loglevel = "INFO"
actor {
- provider = "akka.cluster.ClusterActorRefProvider"
+ provider = "org.apache.pekko.cluster.ClusterActorRefProvider"
serializers {
readylocal = "org.opendaylight.controller.cluster.datastore.messages.ReadyLocalTransactionSerializer"
in-memory-snapshot-store {
class = "org.opendaylight.controller.cluster.raft.utils.InMemorySnapshotStore"
- plugin-dispatcher = "akka.persistence.dispatchers.default-plugin-dispatcher"
+ plugin-dispatcher = "pekko.persistence.dispatchers.default-plugin-dispatcher"
}
shard-dispatcher {
mailbox-type = "org.opendaylight.controller.cluster.common.actor.UnboundedDequeBasedControlAwareMailbox"
}
- akka {
+ pekko {
persistence.snapshot-store.plugin = "in-memory-snapshot-store"
persistence.journal.plugin = "in-memory-journal"
coordinated-shutdown.run-by-actor-system-terminate = off
loglevel = "INFO"
actor {
- provider = "akka.cluster.ClusterActorRefProvider"
+ provider = "org.apache.pekko.cluster.ClusterActorRefProvider"
serializers {
readylocal = "org.opendaylight.controller.cluster.datastore.messages.ReadyLocalTransactionSerializer"
in-memory-snapshot-store {
class = "org.opendaylight.controller.cluster.raft.utils.InMemorySnapshotStore"
- plugin-dispatcher = "akka.persistence.dispatchers.default-plugin-dispatcher"
+ plugin-dispatcher = "pekko.persistence.dispatchers.default-plugin-dispatcher"
}
shard-dispatcher {
mailbox-type = "org.opendaylight.controller.cluster.common.actor.UnboundedDequeBasedControlAwareMailbox"
}
- akka {
+ pekko {
persistence.snapshot-store.plugin = "in-memory-snapshot-store"
persistence.journal.plugin = "in-memory-journal"
coordinated-shutdown.run-by-actor-system-terminate = off
loglevel = "INFO"
actor {
- provider = "akka.cluster.ClusterActorRefProvider"
+ provider = "org.apache.pekko.cluster.ClusterActorRefProvider"
serializers {
readylocal = "org.opendaylight.controller.cluster.datastore.messages.ReadyLocalTransactionSerializer"
in-memory-snapshot-store {
class = "org.opendaylight.controller.cluster.raft.utils.InMemorySnapshotStore"
- plugin-dispatcher = "akka.persistence.dispatchers.default-plugin-dispatcher"
+ plugin-dispatcher = "pekko.persistence.dispatchers.default-plugin-dispatcher"
}
shard-dispatcher {
mailbox-type = "org.opendaylight.controller.cluster.common.actor.UnboundedDequeBasedControlAwareMailbox"
}
- akka {
+ pekko {
persistence.snapshot-store.plugin = "in-memory-snapshot-store"
persistence.journal.plugin = "in-memory-journal"
coordinated-shutdown.run-by-actor-system-terminate = off
loglevel = "INFO"
actor {
- provider = "akka.cluster.ClusterActorRefProvider"
+ provider = "org.apache.pekko.cluster.ClusterActorRefProvider"
serializers {
readylocal = "org.opendaylight.controller.cluster.datastore.messages.ReadyLocalTransactionSerializer"
}
Member1-without-artery {
- akka.remote.artery.enabled = off
+ pekko.remote.artery.enabled = off
}
Member2-without-artery {
- akka.remote.artery.enabled = off
+ pekko.remote.artery.enabled = off
}
Member3-without-artery {
- akka.remote.artery.enabled = off
+ pekko.remote.artery.enabled = off
}
Member4-without-artery {
- akka.remote.artery.enabled = off
+ pekko.remote.artery.enabled = off
}
Member5-without-artery {
- akka.remote.artery.enabled = off
+ pekko.remote.artery.enabled = off
}
in-memory-snapshot-store {
class = "org.opendaylight.controller.cluster.raft.utils.InMemorySnapshotStore"
- plugin-dispatcher = "akka.persistence.dispatchers.default-plugin-dispatcher"
+ plugin-dispatcher = "pekko.persistence.dispatchers.default-plugin-dispatcher"
}
shard-dispatcher {
mailbox-type = "org.opendaylight.controller.cluster.common.actor.UnboundedDequeBasedControlAwareMailbox"
}
- akka {
+ pekko {
persistence {
snapshot-store.plugin = "in-memory-snapshot-store"
journal {
- plugin = "akka.persistence.journal.segmented-file"
+ plugin = "pekko.persistence.journal.segmented-file"
segmented-file {
class = "org.opendaylight.controller.akka.segjournal.SegmentedFileJournal"
loglevel = "INFO"
actor {
- provider = "akka.cluster.ClusterActorRefProvider"
+ provider = "org.apache.pekko.cluster.ClusterActorRefProvider"
serializers {
readylocal = "org.opendaylight.controller.cluster.datastore.messages.ReadyLocalTransactionSerializer"
<!-- Test Dependencies -->
<dependency>
- <groupId>com.typesafe.akka</groupId>
- <artifactId>akka-testkit_2.13</artifactId>
+ <groupId>org.apache.pekko</groupId>
+ <artifactId>pekko-testkit_2.13</artifactId>
</dependency>
<dependency>
*/
package org.opendaylight.controller.dummy.datastore;
-import akka.actor.Props;
-import akka.actor.UntypedAbstractActor;
import com.google.common.base.Stopwatch;
import java.util.concurrent.TimeUnit;
+import org.apache.pekko.actor.Props;
+import org.apache.pekko.actor.UntypedAbstractActor;
import org.opendaylight.controller.cluster.datastore.DataStoreVersions;
import org.opendaylight.controller.cluster.raft.ReplicatedLogEntry;
import org.opendaylight.controller.cluster.raft.messages.AppendEntries;
*/
package org.opendaylight.controller.dummy.datastore;
-import akka.actor.Props;
-import akka.actor.UntypedAbstractActor;
+import org.apache.pekko.actor.Props;
+import org.apache.pekko.actor.UntypedAbstractActor;
public final class DummyShardManager extends UntypedAbstractActor {
public DummyShardManager(final Configuration configuration, final String memberName, final String[] shardNames,
package org.opendaylight.controller.dummy.datastore;
-import akka.actor.ActorSystem;
import com.typesafe.config.ConfigFactory;
+import org.apache.pekko.actor.ActorSystem;
import org.kohsuke.args4j.CmdLineException;
import org.kohsuke.args4j.CmdLineParser;
import org.kohsuke.args4j.Option;
metric-capture-enabled = true
- akka {
+ pekko {
loglevel = "INFO"
- loggers = ["akka.event.slf4j.Slf4jLogger"]
+ loggers = ["org.apache.pekko.event.slf4j.Slf4jLogger"]
actor {
- provider = "akka.cluster.ClusterActorRefProvider"
+ provider = "org.apache.pekko.cluster.ClusterActorRefProvider"
serializers {
- java = "akka.serialization.JavaSerializer"
- proto = "akka.remote.serialization.ProtobufSerializer"
+ java = "org.apache.pekko.serialization.JavaSerializer"
+ proto = "org.apache.pekko.remote.serialization.ProtobufSerializer"
}
serialization-bindings {
}
cluster {
- seed-nodes = ["akka://opendaylight-cluster-data@127.0.0.1:2550", "akka://opendaylight-cluster-data@127.0.0.1:2553"]
+ seed-nodes = ["pekko://opendaylight-cluster-data@127.0.0.1:2550", "pekko://opendaylight-cluster-data@127.0.0.1:2553"]
roles = [
"member-2"
metric-capture-enabled = true
- akka {
+ pekko {
loglevel = "INFO"
- loggers = ["akka.event.slf4j.Slf4jLogger"]
+ loggers = ["org.apache.pekko.event.slf4j.Slf4jLogger"]
actor {
- provider = "akka.cluster.ClusterActorRefProvider"
+ provider = "org.apache.pekko.cluster.ClusterActorRefProvider"
serializers {
- java = "akka.serialization.JavaSerializer"
- proto = "akka.remote.serialization.ProtobufSerializer"
+ java = "org.apache.pekko.serialization.JavaSerializer"
+ proto = "org.apache.pekko.remote.serialization.ProtobufSerializer"
}
serialization-bindings {
}
cluster {
- seed-nodes = ["akka://opendaylight-cluster-data@127.0.0.1:2550", "akka://opendaylight-cluster-data@127.0.0.1:2554"]
+ seed-nodes = ["pekko://opendaylight-cluster-data@127.0.0.1:2550", "pekko://opendaylight-cluster-data@127.0.0.1:2554"]
roles = [
"member-3"
</dependency>
<dependency>
<groupId>org.opendaylight.controller</groupId>
- <artifactId>repackaged-akka</artifactId>
+ <artifactId>repackaged-pekko</artifactId>
</dependency>
<dependency>
<groupId>org.opendaylight.controller</groupId>
<!-- Test Dependencies -->
<dependency>
- <groupId>com.typesafe.akka</groupId>
- <artifactId>akka-testkit_2.13</artifactId>
+ <groupId>org.apache.pekko</groupId>
+ <artifactId>pekko-testkit_2.13</artifactId>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
import static java.util.Objects.requireNonNull;
-import akka.dispatch.OnComplete;
import com.google.common.util.concurrent.AbstractFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
+import org.apache.pekko.dispatch.OnComplete;
import org.eclipse.jdt.annotation.NonNull;
import org.eclipse.jdt.annotation.Nullable;
import org.slf4j.Logger;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
-import akka.pattern.Patterns;
-import akka.util.Timeout;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.pattern.Patterns;
+import org.apache.pekko.util.Timeout;
import org.opendaylight.controller.remote.rpc.messages.AbstractExecute;
import scala.concurrent.Future;
*/
package org.opendaylight.controller.remote.rpc;
-import akka.actor.ActorRef;
-import akka.actor.ActorSystem;
-import akka.actor.PoisonPill;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.actor.PoisonPill;
import org.opendaylight.controller.cluster.ActorSystemProvider;
import org.opendaylight.mdsal.dom.api.DOMActionProviderService;
import org.opendaylight.mdsal.dom.api.DOMActionService;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
-import akka.actor.Props;
-import akka.actor.Status.Failure;
import com.google.common.base.Throwables;
import com.google.common.util.concurrent.FutureCallback;
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.ListenableFuture;
import com.google.common.util.concurrent.MoreExecutors;
import java.util.Collection;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.Props;
+import org.apache.pekko.actor.Status.Failure;
import org.eclipse.jdt.annotation.NonNull;
import org.opendaylight.controller.cluster.common.actor.AbstractUntypedActor;
import org.opendaylight.controller.remote.rpc.messages.ActionResponse;
import static com.google.common.base.Preconditions.checkArgument;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
import java.util.Collection;
import java.util.Set;
+import org.apache.pekko.actor.ActorRef;
import org.opendaylight.controller.remote.rpc.registry.ActionRegistry;
import org.opendaylight.controller.remote.rpc.registry.RpcRegistry;
import org.opendaylight.controller.remote.rpc.registry.RpcRegistry.Messages.AddOrUpdateRoutes;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
-import akka.actor.OneForOneStrategy;
-import akka.actor.Props;
-import akka.actor.SupervisorStrategy;
import java.util.concurrent.TimeUnit;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.OneForOneStrategy;
+import org.apache.pekko.actor.Props;
+import org.apache.pekko.actor.SupervisorStrategy;
import org.opendaylight.controller.cluster.common.actor.AbstractUntypedActor;
import org.opendaylight.controller.remote.rpc.registry.ActionRegistry;
import org.opendaylight.controller.remote.rpc.registry.RpcRegistry;
import static java.util.Objects.requireNonNull;
-import akka.actor.Address;
-import akka.actor.Props;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Map;
import java.util.Map.Entry;
import java.util.Optional;
+import org.apache.pekko.actor.Address;
+import org.apache.pekko.actor.Props;
import org.opendaylight.controller.cluster.common.actor.AbstractUntypedActor;
import org.opendaylight.controller.remote.rpc.registry.ActionRegistry.Messages.UpdateRemoteActionEndpoints;
import org.opendaylight.controller.remote.rpc.registry.ActionRegistry.RemoteActionEndpoint;
*/
package org.opendaylight.controller.remote.rpc;
-import akka.actor.ActorRef;
import com.google.common.util.concurrent.ListenableFuture;
+import org.apache.pekko.actor.ActorRef;
import org.opendaylight.controller.remote.rpc.messages.ExecuteAction;
import org.opendaylight.mdsal.dom.api.DOMActionImplementation;
import org.opendaylight.mdsal.dom.api.DOMDataTreeIdentifier;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
-import akka.actor.ActorSystem;
-import akka.actor.PoisonPill;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.actor.PoisonPill;
import org.opendaylight.mdsal.dom.api.DOMActionProviderService;
import org.opendaylight.mdsal.dom.api.DOMActionService;
import org.opendaylight.mdsal.dom.api.DOMRpcProviderService;
*/
package org.opendaylight.controller.remote.rpc;
-import akka.util.Timeout;
import com.typesafe.config.Config;
import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
import java.util.concurrent.TimeUnit;
+import org.apache.pekko.util.Timeout;
import org.opendaylight.controller.cluster.common.actor.CommonConfig;
import scala.concurrent.duration.FiniteDuration;
*/
package org.opendaylight.controller.remote.rpc;
-import akka.actor.ActorSystem;
+import org.apache.pekko.actor.ActorSystem;
import org.opendaylight.mdsal.dom.api.DOMActionProviderService;
import org.opendaylight.mdsal.dom.api.DOMActionService;
import org.opendaylight.mdsal.dom.api.DOMRpcProviderService;
*/
package org.opendaylight.controller.remote.rpc;
-import akka.actor.ActorRef;
import com.google.common.util.concurrent.ListenableFuture;
+import org.apache.pekko.actor.ActorRef;
import org.opendaylight.controller.remote.rpc.messages.ExecuteRpc;
import org.opendaylight.mdsal.dom.api.DOMRpcIdentifier;
import org.opendaylight.mdsal.dom.api.DOMRpcImplementation;
*/
package org.opendaylight.controller.remote.rpc;
-import akka.actor.Terminated;
-import akka.actor.UntypedAbstractActor;
+import org.apache.pekko.actor.Terminated;
+import org.apache.pekko.actor.UntypedAbstractActor;
import org.opendaylight.controller.cluster.common.actor.Monitor;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.base.MoreObjects;
import com.google.common.collect.ImmutableSet;
import java.io.Serializable;
import java.util.Collection;
import java.util.Optional;
+import org.apache.pekko.actor.ActorRef;
import org.eclipse.jdt.annotation.NonNull;
import org.opendaylight.controller.remote.rpc.registry.gossip.BucketData;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
-import akka.actor.Address;
-import akka.actor.Props;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableMap;
import java.util.Map;
import java.util.Optional;
import java.util.Set;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.Address;
+import org.apache.pekko.actor.Props;
import org.opendaylight.controller.remote.rpc.RemoteOpsProviderConfig;
import org.opendaylight.controller.remote.rpc.registry.gossip.Bucket;
import org.opendaylight.controller.remote.rpc.registry.gossip.BucketStoreAccess;
*/
package org.opendaylight.controller.remote.rpc.registry;
-import akka.actor.ActorRef;
-import akka.serialization.JavaSerializer;
-import akka.serialization.Serialization;
import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
import java.io.Externalizable;
import java.io.IOException;
import java.util.Collection;
import java.util.HashSet;
import java.util.Set;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.serialization.JavaSerializer;
+import org.apache.pekko.serialization.Serialization;
import org.opendaylight.mdsal.common.api.LogicalDatastoreType;
import org.opendaylight.mdsal.dom.api.DOMActionInstance;
import org.opendaylight.yangtools.yang.data.api.YangInstanceIdentifier;
*/
package org.opendaylight.controller.remote.rpc.registry;
-import akka.actor.ActorRef;
-import akka.serialization.JavaSerializer;
-import akka.serialization.Serialization;
import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
import java.io.Externalizable;
import java.io.IOException;
import java.util.Collection;
import java.util.HashSet;
import java.util.Set;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.serialization.JavaSerializer;
+import org.apache.pekko.serialization.Serialization;
import org.opendaylight.mdsal.dom.api.DOMRpcIdentifier;
import org.opendaylight.yangtools.yang.data.codec.binfmt.NormalizedNodeDataInput;
import org.opendaylight.yangtools.yang.data.codec.binfmt.NormalizedNodeDataOutput;
import static com.google.common.base.Preconditions.checkArgument;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
-import akka.actor.Address;
-import akka.actor.Props;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableMap;
import java.util.Map.Entry;
import java.util.Optional;
import java.util.Set;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.Address;
+import org.apache.pekko.actor.Props;
import org.opendaylight.controller.remote.rpc.RemoteOpsProviderConfig;
import org.opendaylight.controller.remote.rpc.registry.RpcRegistry.Messages.AddOrUpdateRoutes;
import org.opendaylight.controller.remote.rpc.registry.RpcRegistry.Messages.RemoveRoutes;
*/
package org.opendaylight.controller.remote.rpc.registry.gossip;
-import akka.actor.ActorRef;
import java.util.Optional;
+import org.apache.pekko.actor.ActorRef;
import org.eclipse.jdt.annotation.NonNull;
public interface Bucket<T extends BucketData<T>> {
*/
package org.opendaylight.controller.remote.rpc.registry.gossip;
-import akka.actor.ActorRef;
import java.util.Optional;
+import org.apache.pekko.actor.ActorRef;
import org.opendaylight.yangtools.concepts.Immutable;
/**
import static org.opendaylight.controller.remote.rpc.registry.gossip.BucketStoreActor.removeBucketMessage;
import static org.opendaylight.controller.remote.rpc.registry.gossip.BucketStoreActor.updateRemoteBucketsMessage;
-import akka.actor.ActorRef;
-import akka.actor.Address;
-import akka.dispatch.OnComplete;
-import akka.pattern.Patterns;
-import akka.util.Timeout;
import com.google.common.annotations.VisibleForTesting;
import java.util.Collection;
import java.util.Map;
import java.util.function.Consumer;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.Address;
+import org.apache.pekko.dispatch.OnComplete;
+import org.apache.pekko.pattern.Patterns;
+import org.apache.pekko.util.Timeout;
import scala.concurrent.ExecutionContext;
import scala.concurrent.Future;
import static org.opendaylight.controller.remote.rpc.registry.gossip.BucketStoreAccess.Singletons.GET_ALL_BUCKETS;
import static org.opendaylight.controller.remote.rpc.registry.gossip.BucketStoreAccess.Singletons.GET_BUCKET_VERSIONS;
-import akka.actor.ActorRef;
-import akka.actor.ActorRefProvider;
-import akka.actor.Address;
-import akka.actor.PoisonPill;
-import akka.actor.Terminated;
-import akka.cluster.ClusterActorRefProvider;
-import akka.persistence.DeleteSnapshotsFailure;
-import akka.persistence.DeleteSnapshotsSuccess;
-import akka.persistence.RecoveryCompleted;
-import akka.persistence.SaveSnapshotFailure;
-import akka.persistence.SaveSnapshotSuccess;
-import akka.persistence.SnapshotOffer;
-import akka.persistence.SnapshotSelectionCriteria;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.collect.HashMultimap;
import com.google.common.collect.ImmutableMap;
import java.util.Map;
import java.util.Optional;
import java.util.function.Consumer;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorRefProvider;
+import org.apache.pekko.actor.Address;
+import org.apache.pekko.actor.PoisonPill;
+import org.apache.pekko.actor.Terminated;
+import org.apache.pekko.cluster.ClusterActorRefProvider;
+import org.apache.pekko.persistence.DeleteSnapshotsFailure;
+import org.apache.pekko.persistence.DeleteSnapshotsSuccess;
+import org.apache.pekko.persistence.RecoveryCompleted;
+import org.apache.pekko.persistence.SaveSnapshotFailure;
+import org.apache.pekko.persistence.SaveSnapshotSuccess;
+import org.apache.pekko.persistence.SnapshotOffer;
+import org.apache.pekko.persistence.SnapshotSelectionCriteria;
import org.opendaylight.controller.cluster.common.actor.AbstractUntypedPersistentActorWithMetering;
import org.opendaylight.controller.remote.rpc.RemoteOpsProviderConfig;
import static java.util.Objects.requireNonNull;
-import akka.actor.Address;
import com.google.common.collect.ImmutableMap;
import java.io.Serializable;
import java.util.Map;
+import org.apache.pekko.actor.Address;
final class GossipEnvelope implements Serializable {
private static final long serialVersionUID = 1L;
*/
package org.opendaylight.controller.remote.rpc.registry.gossip;
-import akka.actor.Address;
import com.google.common.collect.ImmutableMap;
import java.io.Serializable;
import java.util.Map;
+import org.apache.pekko.actor.Address;
final class GossipStatus implements Serializable {
private static final long serialVersionUID = 1L;
import static com.google.common.base.Verify.verifyNotNull;
import static java.util.Objects.requireNonNull;
-import akka.actor.ActorRef;
-import akka.actor.ActorRefProvider;
-import akka.actor.ActorSelection;
-import akka.actor.Address;
-import akka.actor.Cancellable;
-import akka.actor.Props;
-import akka.cluster.Cluster;
-import akka.cluster.ClusterActorRefProvider;
-import akka.cluster.ClusterEvent;
-import akka.cluster.Member;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.collect.Maps;
import java.util.ArrayList;
import java.util.Set;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorRefProvider;
+import org.apache.pekko.actor.ActorSelection;
+import org.apache.pekko.actor.Address;
+import org.apache.pekko.actor.Cancellable;
+import org.apache.pekko.actor.Props;
+import org.apache.pekko.cluster.Cluster;
+import org.apache.pekko.cluster.ClusterActorRefProvider;
+import org.apache.pekko.cluster.ClusterEvent;
+import org.apache.pekko.cluster.Member;
import org.opendaylight.controller.cluster.common.actor.AbstractUntypedActorWithMetering;
import org.opendaylight.controller.remote.rpc.RemoteOpsProviderConfig;
import scala.concurrent.duration.FiniteDuration;
import static java.util.Objects.requireNonNull;
-import akka.actor.Address;
-import akka.util.Timeout;
import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
import java.util.Map;
import java.util.concurrent.TimeoutException;
+import org.apache.pekko.actor.Address;
+import org.apache.pekko.util.Timeout;
import org.eclipse.jdt.annotation.NonNull;
import org.opendaylight.controller.md.sal.common.util.jmx.AbstractMXBean;
import org.opendaylight.controller.remote.rpc.registry.AbstractRoutingTable;
*/
package org.opendaylight.controller.remote.rpc.registry.mbeans;
-import akka.actor.Address;
-import akka.util.Timeout;
import java.util.Collection;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
+import org.apache.pekko.actor.Address;
+import org.apache.pekko.util.Timeout;
import org.opendaylight.controller.remote.rpc.registry.ActionRoutingTable;
import org.opendaylight.controller.remote.rpc.registry.gossip.Bucket;
import org.opendaylight.controller.remote.rpc.registry.gossip.BucketStoreAccess;
*/
package org.opendaylight.controller.remote.rpc.registry.mbeans;
-import akka.actor.Address;
-import akka.util.Timeout;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Map.Entry;
import java.util.Set;
+import org.apache.pekko.actor.Address;
+import org.apache.pekko.util.Timeout;
import org.opendaylight.controller.remote.rpc.registry.RoutingTable;
import org.opendaylight.controller.remote.rpc.registry.gossip.Bucket;
import org.opendaylight.controller.remote.rpc.registry.gossip.BucketStoreAccess;
import static org.junit.Assert.assertNull;
import static org.junit.Assert.assertTrue;
-import akka.actor.ActorRef;
-import akka.actor.ActorSystem;
-import akka.testkit.javadsl.TestKit;
import java.net.URI;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;
import static org.mockito.Mockito.doReturn;
import static org.mockito.Mockito.when;
-import akka.actor.Status.Failure;
import java.time.Duration;
+import org.apache.pekko.actor.Status.Failure;
import org.junit.Test;
import org.opendaylight.controller.remote.rpc.messages.ExecuteRpc;
import org.opendaylight.controller.remote.rpc.messages.RpcResponse;
*/
package org.opendaylight.controller.remote.rpc;
-import akka.actor.ActorRef;
-import akka.actor.ActorSystem;
-import akka.testkit.javadsl.TestKit;
import com.typesafe.config.ConfigFactory;
import java.util.Collections;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.verifyNoMoreInteractions;
-import akka.actor.ActorRef;
-import akka.actor.ActorSystem;
-import akka.actor.Address;
-import akka.actor.Props;
-import akka.testkit.TestActorRef;
-import akka.testkit.javadsl.TestKit;
import com.google.common.collect.ImmutableMap;
import java.util.Collections;
import java.util.Map;
import java.util.Optional;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.actor.Address;
+import org.apache.pekko.actor.Props;
+import org.apache.pekko.testkit.TestActorRef;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertTrue;
-import akka.actor.ActorSystem;
-import akka.actor.Props;
-import akka.actor.UntypedAbstractActor;
-import akka.testkit.TestActorRef;
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;
import java.util.concurrent.TimeUnit;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.actor.Props;
+import org.apache.pekko.actor.UntypedAbstractActor;
+import org.apache.pekko.testkit.TestActorRef;
import org.junit.Test;
import org.opendaylight.controller.cluster.common.actor.AkkaConfigurationReader;
import scala.concurrent.duration.FiniteDuration;
import static org.mockito.MockitoAnnotations.initMocks;
-import akka.actor.ActorSystem;
+import org.apache.pekko.actor.ActorSystem;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertTrue;
import static org.mockito.Mockito.mock;
-import akka.actor.ActorRef;
-import akka.actor.ActorSystem;
-import akka.testkit.javadsl.TestKit;
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;
import java.util.concurrent.TimeUnit;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;
import static org.opendaylight.controller.remote.rpc.registry.gossip.BucketStoreAccess.Singletons.GET_ALL_BUCKETS;
import static org.opendaylight.controller.remote.rpc.registry.gossip.BucketStoreAccess.Singletons.GET_BUCKET_VERSIONS;
-import akka.actor.ActorRef;
-import akka.actor.ActorSystem;
-import akka.actor.Address;
-import akka.cluster.Cluster;
-import akka.cluster.ClusterEvent.CurrentClusterState;
-import akka.cluster.Member;
-import akka.cluster.MemberStatus;
-import akka.cluster.UniqueAddress;
-import akka.testkit.javadsl.TestKit;
import com.google.common.base.Stopwatch;
import com.google.common.collect.Sets;
import com.google.common.util.concurrent.Uninterruptibles;
import java.util.Optional;
import java.util.Set;
import java.util.concurrent.TimeUnit;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.actor.Address;
+import org.apache.pekko.cluster.Cluster;
+import org.apache.pekko.cluster.ClusterEvent.CurrentClusterState;
+import org.apache.pekko.cluster.Member;
+import org.apache.pekko.cluster.MemberStatus;
+import org.apache.pekko.cluster.UniqueAddress;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.After;
import org.junit.AfterClass;
import org.junit.Before;
import static org.opendaylight.controller.remote.rpc.registry.gossip.BucketStoreAccess.Singletons.GET_ALL_BUCKETS;
import static org.opendaylight.controller.remote.rpc.registry.gossip.BucketStoreAccess.Singletons.GET_BUCKET_VERSIONS;
-import akka.actor.ActorRef;
-import akka.actor.ActorSystem;
-import akka.actor.Address;
-import akka.cluster.Cluster;
-import akka.cluster.ClusterEvent.CurrentClusterState;
-import akka.cluster.Member;
-import akka.cluster.MemberStatus;
-import akka.cluster.UniqueAddress;
-import akka.testkit.javadsl.TestKit;
import com.google.common.base.Stopwatch;
import com.google.common.collect.Sets;
import com.google.common.util.concurrent.Uninterruptibles;
import java.util.Optional;
import java.util.Set;
import java.util.concurrent.TimeUnit;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.actor.Address;
+import org.apache.pekko.cluster.Cluster;
+import org.apache.pekko.cluster.ClusterEvent.CurrentClusterState;
+import org.apache.pekko.cluster.Member;
+import org.apache.pekko.cluster.MemberStatus;
+import org.apache.pekko.cluster.UniqueAddress;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.After;
import org.junit.AfterClass;
import org.junit.Before;
*/
package org.opendaylight.controller.remote.rpc.registry.gossip;
-import akka.actor.ActorRef;
-import akka.actor.ActorSystem;
-import akka.actor.Address;
-import akka.actor.Props;
-import akka.testkit.TestActorRef;
-import akka.testkit.javadsl.TestKit;
import com.google.common.collect.ImmutableMap;
import com.typesafe.config.ConfigFactory;
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.actor.Address;
+import org.apache.pekko.actor.Props;
+import org.apache.pekko.testkit.TestActorRef;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.AfterClass;
import org.junit.Assert;
import org.junit.BeforeClass;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;
-import akka.actor.ActorSelection;
-import akka.actor.ActorSystem;
-import akka.actor.Address;
-import akka.actor.Props;
-import akka.testkit.TestActorRef;
-import akka.testkit.javadsl.TestKit;
import com.typesafe.config.ConfigFactory;
import java.util.Map;
+import org.apache.pekko.actor.ActorSelection;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.actor.Address;
+import org.apache.pekko.actor.Props;
+import org.apache.pekko.testkit.TestActorRef;
+import org.apache.pekko.testkit.javadsl.TestKit;
import org.junit.After;
import org.junit.AfterClass;
import org.junit.Before;
*/
package org.opendaylight.controller.remote.rpc.registry.mbeans;
-import akka.actor.ActorRef;
-import akka.actor.ActorSystem;
-import akka.actor.Props;
-import akka.dispatch.Dispatchers;
-import akka.testkit.TestActorRef;
-import akka.testkit.javadsl.TestKit;
-import akka.util.Timeout;
import com.google.common.collect.Lists;
import com.typesafe.config.ConfigFactory;
import java.util.Collections;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.TimeUnit;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.actor.Props;
+import org.apache.pekko.dispatch.Dispatchers;
+import org.apache.pekko.testkit.TestActorRef;
+import org.apache.pekko.testkit.javadsl.TestKit;
+import org.apache.pekko.util.Timeout;
import org.junit.After;
import org.junit.Assert;
import org.junit.Before;
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertTrue;
-import akka.actor.ActorRef;
-import akka.actor.ActorSystem;
-import akka.actor.Props;
-import akka.dispatch.Dispatchers;
-import akka.testkit.TestActorRef;
-import akka.testkit.javadsl.TestKit;
-import akka.util.Timeout;
import com.google.common.collect.Lists;
import com.typesafe.config.ConfigFactory;
import java.util.Collections;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.TimeUnit;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.actor.ActorSystem;
+import org.apache.pekko.actor.Props;
+import org.apache.pekko.dispatch.Dispatchers;
+import org.apache.pekko.testkit.TestActorRef;
+import org.apache.pekko.testkit.javadsl.TestKit;
+import org.apache.pekko.util.Timeout;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
mailbox-push-timeout-time = 10ms
}
- akka {
+ pekko {
loglevel = "INFO"
#log-config-on-start = on
actor {
- provider = "akka.cluster.ClusterActorRefProvider"
+ provider = "org.apache.pekko.cluster.ClusterActorRefProvider"
debug{
#autoreceive = on
#lifecycle = on
}
cluster {
- seed-nodes = ["akka://opendaylight-rpc@127.0.0.1:2550"]
+ seed-nodes = ["pekko://opendaylight-rpc@127.0.0.1:2550"]
}
}
}
unit-test {
- akka {
+ pekko {
loglevel = "DEBUG"
- #loggers = ["akka.event.slf4j.Slf4jLogger"]
+ #loggers = ["org.apache.pekko.event.slf4j.Slf4jLogger"]
persistence.snapshot-store.plugin = "in-memory-snapshot-store"
persistence.journal.plugin = "in-memory-journal"
}
# Class name of the plugin.
class = "org.opendaylight.controller.cluster.raft.utils.InMemorySnapshotStore"
# Dispatcher for the plugin actor.
- plugin-dispatcher = "akka.persistence.dispatchers.default-plugin-dispatcher"
+ plugin-dispatcher = "pekko.persistence.dispatchers.default-plugin-dispatcher"
}
}
mailbox-capacity = 1000
mailbox-push-timeout-time = 10ms
}
- akka {
+ pekko {
loglevel = "INFO"
- loggers = ["akka.event.slf4j.Slf4jLogger"]
+ loggers = ["org.apache.pekko.event.slf4j.Slf4jLogger"]
persistence.snapshot-store.plugin = "in-memory-snapshot-store"
persistence.journal.plugin = "in-memory-journal"
actor {
- provider = "akka.cluster.ClusterActorRefProvider"
+ provider = "org.apache.pekko.cluster.ClusterActorRefProvider"
debug {
#lifecycle = on
}
}
cluster {
- seed-nodes = ["akka://opendaylight-rpc@127.0.0.1:2551"]
+ seed-nodes = ["pekko://opendaylight-rpc@127.0.0.1:2551"]
}
}
in-memory-journal {
# Class name of the plugin.
class = "org.opendaylight.controller.cluster.raft.utils.InMemorySnapshotStore"
# Dispatcher for the plugin actor.
- plugin-dispatcher = "akka.persistence.dispatchers.default-plugin-dispatcher"
+ plugin-dispatcher = "pekko.persistence.dispatchers.default-plugin-dispatcher"
}
}
memberB {
mailbox-capacity = 1000
mailbox-push-timeout-time = 10ms
}
- akka {
+ pekko {
loglevel = "INFO"
- loggers = ["akka.event.slf4j.Slf4jLogger"]
+ loggers = ["org.apache.pekko.event.slf4j.Slf4jLogger"]
persistence.snapshot-store.plugin = "in-memory-snapshot-store"
persistence.journal.plugin = "in-memory-journal"
actor {
- provider = "akka.cluster.ClusterActorRefProvider"
+ provider = "org.apache.pekko.cluster.ClusterActorRefProvider"
debug {
#lifecycle = on
}
}
cluster {
- seed-nodes = ["akka://opendaylight-rpc@127.0.0.1:2551"]
+ seed-nodes = ["pekko://opendaylight-rpc@127.0.0.1:2551"]
}
}
in-memory-journal {
# Class name of the plugin.
class = "org.opendaylight.controller.cluster.raft.utils.InMemorySnapshotStore"
# Dispatcher for the plugin actor.
- plugin-dispatcher = "akka.persistence.dispatchers.default-plugin-dispatcher"
+ plugin-dispatcher = "pekko.persistence.dispatchers.default-plugin-dispatcher"
}
}
memberC {
mailbox-capacity = 1000
mailbox-push-timeout-time = 10ms
}
- akka {
+ pekko {
loglevel = "INFO"
- loggers = ["akka.event.slf4j.Slf4jLogger"]
+ loggers = ["org.apache.pekko.event.slf4j.Slf4jLogger"]
persistence.snapshot-store.plugin = "in-memory-snapshot-store"
persistence.journal.plugin = "in-memory-journal"
actor {
- provider = "akka.cluster.ClusterActorRefProvider"
+ provider = "org.apache.pekko.cluster.ClusterActorRefProvider"
debug {
#lifecycle = on
}
}
cluster {
- seed-nodes = ["akka://opendaylight-rpc@127.0.0.1:2551"]
+ seed-nodes = ["pekko://opendaylight-rpc@127.0.0.1:2551"]
}
}
in-memory-journal {
# Class name of the plugin.
class = "org.opendaylight.controller.cluster.raft.utils.InMemorySnapshotStore"
# Dispatcher for the plugin actor.
- plugin-dispatcher = "akka.persistence.dispatchers.default-plugin-dispatcher"
+ plugin-dispatcher = "pekko.persistence.dispatchers.default-plugin-dispatcher"
}
}
*/
package org.opendaylight.controller.clustering.it.provider;
-import akka.actor.ActorRef;
-import akka.dispatch.Futures;
-import akka.dispatch.OnComplete;
-import akka.pattern.Patterns;
import com.google.common.base.Strings;
import com.google.common.util.concurrent.ListenableFuture;
import com.google.common.util.concurrent.SettableFuture;
import javax.annotation.PreDestroy;
import javax.inject.Inject;
import javax.inject.Singleton;
+import org.apache.pekko.actor.ActorRef;
+import org.apache.pekko.dispatch.Futures;
+import org.apache.pekko.dispatch.OnComplete;
+import org.apache.pekko.pattern.Patterns;
import org.opendaylight.controller.cluster.datastore.DistributedDataStoreInterface;
import org.opendaylight.controller.cluster.datastore.utils.ActorUtils;
import org.opendaylight.controller.cluster.raft.client.messages.Shutdown;
</parent>
<groupId>org.opendaylight.controller</groupId>
- <artifactId>akka-aggregator</artifactId>
+ <artifactId>pekko-aggregator</artifactId>
<version>11.0.0-SNAPSHOT</version>
<packaging>pom</packaging>
</properties>
<modules>
- <module>repackaged-akka-jar</module>
- <module>repackaged-akka</module>
+ <module>repackaged-pekko-jar</module>
+ <module>repackaged-pekko</module>
</modules>
</project>
</parent>
<groupId>org.opendaylight.controller</groupId>
- <artifactId>repackaged-akka-jar</artifactId>
+ <artifactId>repackaged-pekko-jar</artifactId>
<packaging>jar</packaging>
<version>11.0.0-SNAPSHOT</version>
<name>${project.artifactId}</name>
<dependencies>
<!-- Note: when bumping versions, make sure to update configurations in src/main/resources -->
<dependency>
- <groupId>com.typesafe.akka</groupId>
- <artifactId>akka-actor_2.13</artifactId>
- <version>2.6.21</version>
+ <groupId>org.apache.pekko</groupId>
+ <artifactId>pekko-actor_2.13</artifactId>
+ <version>1.0.2</version>
</dependency>
<dependency>
- <groupId>com.typesafe.akka</groupId>
- <artifactId>akka-actor-typed_2.13</artifactId>
- <version>2.6.21</version>
+ <groupId>org.apache.pekko</groupId>
+ <artifactId>pekko-actor-typed_2.13</artifactId>
+ <version>1.0.2</version>
</dependency>
<dependency>
- <groupId>com.typesafe.akka</groupId>
- <artifactId>akka-cluster_2.13</artifactId>
- <version>2.6.21</version>
+ <groupId>org.apache.pekko</groupId>
+ <artifactId>pekko-cluster_2.13</artifactId>
+ <version>1.0.2</version>
</dependency>
<dependency>
- <groupId>com.typesafe.akka</groupId>
- <artifactId>akka-cluster-typed_2.13</artifactId>
- <version>2.6.21</version>
+ <groupId>org.apache.pekko</groupId>
+ <artifactId>pekko-cluster-typed_2.13</artifactId>
+ <version>1.0.2</version>
</dependency>
<dependency>
- <groupId>com.typesafe.akka</groupId>
- <artifactId>akka-osgi_2.13</artifactId>
- <version>2.6.21</version>
+ <groupId>org.apache.pekko</groupId>
+ <artifactId>pekko-osgi_2.13</artifactId>
+ <version>1.0.2</version>
</dependency>
<dependency>
- <groupId>com.typesafe.akka</groupId>
- <artifactId>akka-persistence_2.13</artifactId>
- <version>2.6.21</version>
+ <groupId>org.apache.pekko</groupId>
+ <artifactId>pekko-persistence_2.13</artifactId>
+ <version>1.0.2</version>
</dependency>
<dependency>
- <groupId>com.typesafe.akka</groupId>
- <artifactId>akka-protobuf_2.13</artifactId>
- <version>2.6.21</version>
+ <groupId>org.apache.pekko</groupId>
+ <artifactId>pekko-protobuf_2.13</artifactId>
+ <version>1.0.2</version>
</dependency>
<dependency>
- <groupId>com.typesafe.akka</groupId>
- <artifactId>akka-remote_2.13</artifactId>
- <version>2.6.21</version>
+ <groupId>org.apache.pekko</groupId>
+ <artifactId>pekko-remote_2.13</artifactId>
+ <version>1.0.2</version>
</dependency>
<dependency>
- <groupId>com.typesafe.akka</groupId>
- <artifactId>akka-slf4j_2.13</artifactId>
- <version>2.6.21</version>
+ <groupId>org.apache.pekko</groupId>
+ <artifactId>pekko-slf4j_2.13</artifactId>
+ <version>1.0.2</version>
</dependency>
<dependency>
- <groupId>com.typesafe.akka</groupId>
- <artifactId>akka-stream_2.13</artifactId>
- <version>2.6.21</version>
+ <groupId>org.apache.pekko</groupId>
+ <artifactId>pekko-stream_2.13</artifactId>
+ <version>1.0.2</version>
</dependency>
</dependencies>
<promoteTransitiveDependencies>true</promoteTransitiveDependencies>
<artifactSet>
<includes>
- <include>com.typesafe.akka</include>
+ <include>org.apache.pekko</include>
</includes>
</artifactSet>
<filters>
<filter>
- <artifact>com.typesafe.akka:*</artifact>
+ <artifact>org.apache.pekko:*</artifact>
<excludes>
<exclude>META-INF/MANIFEST.MF</exclude>
<exclude>reference.conf</exclude>
--- /dev/null
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+---------------
+
+pekko-actor contains MurmurHash.scala which is derived from MurmurHash3,
+written by Austin Appleby. He has placed his code in the public domain.
+The author has disclaimed copyright to that source code.
+MurmurHash.scala also contains changes made by the Scala-Lang team under an Apache 2.0 license.
+Copyright (c) 2003-2011, LAMP/EPFL
+
+---------------
+
+pekko-actor contains code from scala-collection-compat in the `org.apache.pekko.util.ccompat` package
+which was released under an Apache 2.0 license.
+- actor/src/main/scala-2.12/org/apache/pekko/util/ccompat/package.scala
+
+Scala (https://www.scala-lang.org)
+
+Copyright EPFL and Lightbend, Inc.
+
+---------------
+
+pekko-actor contains code from scala-library in the `org.apache.pekko.util.ccompat` package
+and in `org.apache.pekko.util.Helpers.scala` which was released under an Apache 2.0 license.
+- actor/src/main/scala-2.12/org/apache/pekko/util/ccompat/package.scala
+- actor/src/main/scala/org/apache/pekko/util/Helpers.scala
+
+Scala (https://www.scala-lang.org)
+
+Copyright EPFL and Lightbend, Inc.
+
+---------------
+
+pekko-actor contains code from Netty in `org.apache.pekko.io.dns.DnsSettings.scala`
+which was released under an Apache 2.0 license.
+Copyright 2014 The Netty Project
+
+---------------
+
+pekko-actor contains code from java-uuid-generator <https://github.com/cowtowncoder/java-uuid-generator>
+in `org.apache.pekko.util.UUIDComparator.scala` which was released under an Apache 2.0 license.
+
+---------------
+
+pekko-actor contains code in `org.apache.pekko.dispatch.AbstractNodeQueue.java` and in
+`org.apache.pekko.dispatch.AbstractBoundedNodeQueue.java` which was based on
+code from https://www.1024cores.net/home/lock-free-algorithms/queues/non-intrusive-mpsc-node-based-queue which
+was released under the Simplified BSD license.
+
+Copyright (c) 2010-2011 Dmitry Vyukov. All rights reserved.
+
+Redistribution and use in source and binary forms, with or without modification, are permitted provided that
+the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice, this list of
+ conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright notice, this list
+ of conditions and the following disclaimer in the documentation and/or other materials
+ provided with the distribution.
+
+THIS SOFTWARE IS PROVIDED BY DMITRY VYUKOV "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+IN NO EVENT SHALL DMITRY VYUKOV OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY
+WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+The views and conclusions contained in the software and documentation are those of the authors and should not
+be interpreted as representing official policies, either expressed or implied, of Dmitry Vyukov.
+
+---------------
+
+pekko-actor contains code in `org.apache.pekko.dispatch.AbstractBoundedNodeQueue.java` which was based on
+code from actors <https://github.com/plokhotnyuk/actors> which was released under the Apache 2.0 license.
+
+---------------
+
+pekko-actor contains code in `org.apache.pekko.util.FrequencySketch.scala` which was based on
+code from hash-prospector <https://github.com/skeeto/hash-prospector> which has been placed
+in the public domain.
+
+This is free and unencumbered software released into the public domain.
+
+Anyone is free to copy, modify, publish, use, compile, sell, or
+distribute this software, either in source code form or as a compiled
+binary, for any purpose, commercial or non-commercial, and by any
+means.
+
+In jurisdictions that recognize copyright laws, the author or authors
+of this software dedicate any and all copyright interest in the
+software to the public domain. We make this dedication for the benefit
+of the public at large and to the detriment of our heirs and
+successors. We intend this dedication to be an overt act of
+relinquishment in perpetuity of all present and future rights to this
+software under copyright law.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
+IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+OTHER DEALINGS IN THE SOFTWARE.
+
+For more information, please refer to <http://unlicense.org/>
+
+---------------
+
+pekko-actor contains code in `org.apache.pekko.util.FrequencySketch.scala` which was based on code from
+Caffeine <https://github.com/ben-manes/caffeine> which was developed under the Apache 2.0 license.
+Copyright 2015 Ben Manes. All Rights Reserved.
+
+---------------
+
+pekko-cluster contains VectorClock.scala which is derived from code written
+by Coda Hale <https://github.com/codahale/vlock>.
+He has agreed to allow us to use this code under an Apache 2.0 license
+<https://github.com/apache/pekko/issues/232#issuecomment-1465281263>.
+
+---------------
+
+pekko-distributed-data and pekko-persistence-typed contain `ORSet.scala`
+which is derived from code written for the Riak DT project
+<https://github.com/basho/riak_dt/blob/develop/src/riak_dt_orswot.erl>.
+This code is licensed under an Apache 2.0 license.
+- distributed-data/src/main/scala/org/apache/pekko/cluster/ddata/ORSet.scala
+- persistence-typed/src/main/scala/org/apache/pekko/persistence/typed/crdt/ORSet.scala
+Copyright (c) 2007-2013 Basho Technologies, Inc. All Rights Reserved.
+
+---------------
+
+pekko-remote contains CountMinSketch.java which contains code derived from MurmurHash3,
+written by Austin Appleby. He has placed his code in the public domain.
+The author has disclaimed copyright to that source code.
+CountMinSketch.java also contains additional code developed under an Apache 2.0 license.
+Copyright 2016 AddThis
+
+---------------
+
+pekko-remote contains code from Aeron <https://github.com/real-logic/aeron>.
+
+./remote/src/test/java/org/apache/pekko/remote/artery/aeron/AeronStat.java
+./remote/src/test/java/org/apache/pekko/remote/artery/RateReporter.java
+./remote/src/main/java/org/apache/pekko/remote/artery/aeron/AeronErrorLog.java
+
+This code was released under an Apache 2.0 license.
+Copyright 2014 - 2016 Real Logic Ltd.
+
+---------------
+
+pekko-persistence-typed contains AuctionEntity.java in its test source.
+This code is derived from a class in Lagom <https://github.com/lagom>,
+licensed under the Apache 2.0 license.
+Copyright 2016 Lightbend Inc. [http://www.lightbend.com]
+
+---------------
+
+remote/src/test/resources/ssl/ contains shell scripts that are based on scripts
+in Play Samples <https://github.com/playframework/play-samples/>.
+The original scripts were released under a CC0 1.0 Universal license.
+
+CC0 1.0 Universal
+
+Statement of Purpose
+
+The laws of most jurisdictions throughout the world automatically confer
+exclusive Copyright and Related Rights (defined below) upon the creator and
+subsequent owner(s) (each and all, an "owner") of an original work of
+authorship and/or a database (each, a "Work").
+
+Certain owners wish to permanently relinquish those rights to a Work for the
+purpose of contributing to a commons of creative, cultural and scientific
+works ("Commons") that the public can reliably and without fear of later
+claims of infringement build upon, modify, incorporate in other works, reuse
+and redistribute as freely as possible in any form whatsoever and for any
+purposes, including without limitation commercial purposes. These owners may
+contribute to the Commons to promote the ideal of a free culture and the
+further production of creative, cultural and scientific works, or to gain
+reputation or greater distribution for their Work in part through the use and
+efforts of others.
+
+For these and/or other purposes and motivations, and without any expectation
+of additional consideration or compensation, the person associating CC0 with a
+Work (the "Affirmer"), to the extent that he or she is an owner of Copyright
+and Related Rights in the Work, voluntarily elects to apply CC0 to the Work
+and publicly distribute the Work under its terms, with knowledge of his or her
+Copyright and Related Rights in the Work and the meaning and intended legal
+effect of CC0 on those rights.
+
+1. Copyright and Related Rights. A Work made available under CC0 may be
+protected by copyright and related or neighboring rights ("Copyright and
+Related Rights"). Copyright and Related Rights include, but are not limited
+to, the following:
+
+ i. the right to reproduce, adapt, distribute, perform, display, communicate,
+ and translate a Work;
+
+ ii. moral rights retained by the original author(s) and/or performer(s);
+
+ iii. publicity and privacy rights pertaining to a person's image or likeness
+ depicted in a Work;
+
+ iv. rights protecting against unfair competition in regards to a Work,
+ subject to the limitations in paragraph 4(a), below;
+
+ v. rights protecting the extraction, dissemination, use and reuse of data in
+ a Work;
+
+ vi. database rights (such as those arising under Directive 96/9/EC of the
+ European Parliament and of the Council of 11 March 1996 on the legal
+ protection of databases, and under any national implementation thereof,
+ including any amended or successor version of such directive); and
+
+ vii. other similar, equivalent or corresponding rights throughout the world
+ based on applicable law or treaty, and any national implementations thereof.
+
+2. Waiver. To the greatest extent permitted by, but not in contravention of,
+applicable law, Affirmer hereby overtly, fully, permanently, irrevocably and
+unconditionally waives, abandons, and surrenders all of Affirmer's Copyright
+and Related Rights and associated claims and causes of action, whether now
+known or unknown (including existing as well as future claims and causes of
+action), in the Work (i) in all territories worldwide, (ii) for the maximum
+duration provided by applicable law or treaty (including future time
+extensions), (iii) in any current or future medium and for any number of
+copies, and (iv) for any purpose whatsoever, including without limitation
+commercial, advertising or promotional purposes (the "Waiver"). Affirmer makes
+the Waiver for the benefit of each member of the public at large and to the
+detriment of Affirmer's heirs and successors, fully intending that such Waiver
+shall not be subject to revocation, rescission, cancellation, termination, or
+any other legal or equitable action to disrupt the quiet enjoyment of the Work
+by the public as contemplated by Affirmer's express Statement of Purpose.
+
+3. Public License Fallback. Should any part of the Waiver for any reason be
+judged legally invalid or ineffective under applicable law, then the Waiver
+shall be preserved to the maximum extent permitted taking into account
+Affirmer's express Statement of Purpose. In addition, to the extent the Waiver
+is so judged Affirmer hereby grants to each affected person a royalty-free,
+non transferable, non sublicensable, non exclusive, irrevocable and
+unconditional license to exercise Affirmer's Copyright and Related Rights in
+the Work (i) in all territories worldwide, (ii) for the maximum duration
+provided by applicable law or treaty (including future time extensions), (iii)
+in any current or future medium and for any number of copies, and (iv) for any
+purpose whatsoever, including without limitation commercial, advertising or
+promotional purposes (the "License"). The License shall be deemed effective as
+of the date CC0 was applied by Affirmer to the Work. Should any part of the
+License for any reason be judged legally invalid or ineffective under
+applicable law, such partial invalidity or ineffectiveness shall not
+invalidate the remainder of the License, and in such case Affirmer hereby
+affirms that he or she will not (i) exercise any of his or her remaining
+Copyright and Related Rights in the Work or (ii) assert any associated claims
+and causes of action with respect to the Work, in either case contrary to
+Affirmer's express Statement of Purpose.
+
+4. Limitations and Disclaimers.
+
+ a. No trademark or patent rights held by Affirmer are waived, abandoned,
+ surrendered, licensed or otherwise affected by this document.
+
+ b. Affirmer offers the Work as-is and makes no representations or warranties
+ of any kind concerning the Work, express, implied, statutory or otherwise,
+ including without limitation warranties of title, merchantability, fitness
+ for a particular purpose, non infringement, or the absence of latent or
+ other defects, accuracy, or the present or absence of errors, whether or not
+ discoverable, all to the greatest extent permissible under applicable law.
+
+ c. Affirmer disclaims responsibility for clearing rights of other persons
+ that may apply to the Work or any use thereof, including without limitation
+ any person's Copyright and Related Rights in the Work. Further, Affirmer
+ disclaims responsibility for obtaining any necessary consents, permissions
+ or other rights required for any use of the Work.
+
+ d. Affirmer understands and acknowledges that Creative Commons is not a
+ party to this document and has no duty or obligation with respect to this
+ CC0 or use of the Work.
+
+For more information, please see
+<http://creativecommons.org/publicdomain/zero/1.0/>
-####################################
-# Akka Actor Reference Config File #
-####################################
+# SPDX-License-Identifier: Apache-2.0
+
+#####################################
+# Pekko Actor Reference Config File #
+########################3############
# This is the reference config file that contains all the default settings.
# Make your edits/overrides in your application.conf.
-# Akka version, checked against the runtime version of Akka. Loaded from generated conf file.
+# Pekko version, checked against the runtime version of Pekko. Loaded from generated conf file.
include "version"
-akka {
- # Home directory of Akka, modules in the deploy directory will be loaded
+pekko {
+ # Home directory of Pekko, modules in the deploy directory will be loaded
home = ""
- # Loggers to register at boot time (akka.event.Logging$DefaultLogger logs
+ # Loggers to register at boot time (org.apache.pekko.event.Logging$DefaultLogger logs
# to STDOUT)
- loggers = ["akka.event.Logging$DefaultLogger"]
+ loggers = ["org.apache.pekko.event.Logging$DefaultLogger"]
# Filter of log events that is used by the LoggingAdapter before
# publishing log events to the eventStream. It can perform
# fine grained filtering based on the log source. The default
# implementation filters on the `loglevel`.
# FQCN of the LoggingFilter. The Class of the FQCN must implement
- # akka.event.LoggingFilter and have a public constructor with
- # (akka.actor.ActorSystem.Settings, akka.event.EventStream) parameters.
- logging-filter = "akka.event.DefaultLoggingFilter"
+ # org.apache.pekko.event.LoggingFilter and have a public constructor with
+ # (org.apache.pekko.actor.ActorSystem.Settings, org.apache.pekko.event.EventStream) parameters.
+ logging-filter = "org.apache.pekko.event.DefaultLoggingFilter"
# Specifies the default loggers dispatcher
- loggers-dispatcher = "akka.actor.default-dispatcher"
+ loggers-dispatcher = "pekko.actor.default-dispatcher"
# Loggers are created and registered synchronously during ActorSystem
# start-up, and since they are actors, this timeout is used to bound the
#
# Should not be set by end user applications in 'application.conf', use the extensions property for that
#
- library-extensions = ${?akka.library-extensions} ["akka.serialization.SerializationExtension$"]
+ library-extensions = ${?pekko.library-extensions} ["org.apache.pekko.serialization.SerializationExtension$"]
# List FQCN of extensions which shall be loaded at actor system startup.
# Should be on the format: 'extensions = ["foo", "bar"]' etc.
- # See the Akka Documentation for more info about Extensions
+ # See the Pekko Documentation for more info about Extensions
extensions = []
# Toggles whether threads created by this ActorSystem should be daemons or not
# such as OutOfMemoryError
jvm-exit-on-fatal-error = on
- # Akka installs JVM shutdown hooks by default, e.g. in CoordinatedShutdown and Artery. This property will
+ # Pekko installs JVM shutdown hooks by default, e.g. in CoordinatedShutdown and Artery. This property will
# not disable user-provided hooks registered using `CoordinatedShutdown#addCancellableJvmShutdownHook`.
- # This property is related to `akka.coordinated-shutdown.run-by-jvm-shutdown-hook` below.
+ # This property is related to `pekko.coordinated-shutdown.run-by-jvm-shutdown-hook` below.
# This property makes it possible to disable all such hooks if the application itself
# or a higher level framework such as Play prefers to install the JVM shutdown hook and
# terminate the ActorSystem itself, with or without using CoordinatedShutdown.
# Either one of "local", "remote" or "cluster" or the
# FQCN of the ActorRefProvider to be used; the below is the built-in default,
- # note that "remote" and "cluster" requires the akka-remote and akka-cluster
+ # note that "remote" and "cluster" requires the pekko-remote and pekko-cluster
# artifacts to be on the classpath.
provider = "local"
# The guardian "/user" will use this class to obtain its supervisorStrategy.
- # It needs to be a subclass of akka.actor.SupervisorStrategyConfigurator.
- # In addition to the default there is akka.actor.StoppingSupervisorStrategy.
- guardian-supervisor-strategy = "akka.actor.DefaultSupervisorStrategy"
+ # It needs to be a subclass of org.apache.pekko.actor.SupervisorStrategyConfigurator.
+ # In addition to the default there is org.apache.pekko.actor.StoppingSupervisorStrategy.
+ guardian-supervisor-strategy = "org.apache.pekko.actor.DefaultSupervisorStrategy"
# Timeout for Extension creation and a few other potentially blocking
# initialization tasks.
# If serialize-messages or serialize-creators are enabled classes that starts with
# a prefix listed here are not verified.
- no-serialization-verification-needed-class-prefix = ["akka."]
+ no-serialization-verification-needed-class-prefix = ["org.apache.pekko."]
# Timeout for send operations to top-level actors which are in the process
# of being started. This is only relevant if using a bounded mailbox or the
# CallingThreadDispatcher for a top-level actor.
unstarted-push-timeout = 10s
- # TypedActor deprecated since 2.6.0.
+ # TypedActor deprecated since Akka 2.6.0.
typed {
- # Default timeout for the deprecated TypedActor (not the new actor APIs in 2.6)
+ # Default timeout for the deprecated TypedActor (not the new actor APIs in Akka 2.6)
# methods with non-void return type.
timeout = 5s
}
# Mapping between 'deployment.router' short names to fully qualified class names
router.type-mapping {
- from-code = "akka.routing.NoRouter"
- round-robin-pool = "akka.routing.RoundRobinPool"
- round-robin-group = "akka.routing.RoundRobinGroup"
- random-pool = "akka.routing.RandomPool"
- random-group = "akka.routing.RandomGroup"
- balancing-pool = "akka.routing.BalancingPool"
- smallest-mailbox-pool = "akka.routing.SmallestMailboxPool"
- broadcast-pool = "akka.routing.BroadcastPool"
- broadcast-group = "akka.routing.BroadcastGroup"
- scatter-gather-pool = "akka.routing.ScatterGatherFirstCompletedPool"
- scatter-gather-group = "akka.routing.ScatterGatherFirstCompletedGroup"
- tail-chopping-pool = "akka.routing.TailChoppingPool"
- tail-chopping-group = "akka.routing.TailChoppingGroup"
- consistent-hashing-pool = "akka.routing.ConsistentHashingPool"
- consistent-hashing-group = "akka.routing.ConsistentHashingGroup"
+ from-code = "org.apache.pekko.routing.NoRouter"
+ round-robin-pool = "org.apache.pekko.routing.RoundRobinPool"
+ round-robin-group = "org.apache.pekko.routing.RoundRobinGroup"
+ random-pool = "org.apache.pekko.routing.RandomPool"
+ random-group = "org.apache.pekko.routing.RandomGroup"
+ balancing-pool = "org.apache.pekko.routing.BalancingPool"
+ smallest-mailbox-pool = "org.apache.pekko.routing.SmallestMailboxPool"
+ broadcast-pool = "org.apache.pekko.routing.BroadcastPool"
+ broadcast-group = "org.apache.pekko.routing.BroadcastGroup"
+ scatter-gather-pool = "org.apache.pekko.routing.ScatterGatherFirstCompletedPool"
+ scatter-gather-group = "org.apache.pekko.routing.ScatterGatherFirstCompletedGroup"
+ tail-chopping-pool = "org.apache.pekko.routing.TailChoppingPool"
+ tail-chopping-group = "org.apache.pekko.routing.TailChoppingGroup"
+ consistent-hashing-pool = "org.apache.pekko.routing.ConsistentHashingPool"
+ consistent-hashing-group = "org.apache.pekko.routing.ConsistentHashingGroup"
}
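The short names above can be used as the `router` value in deployment configuration. A minimal sketch for application.conf, with a hypothetical actor path and pool size:

```
pekko.actor.deployment {
  # Hypothetical path; the short name resolves via router.type-mapping above
  /parent/worker-router {
    router = round-robin-pool
    nr-of-instances = 5
  }
}
```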
deployment {
# - available: "from-code", "round-robin", "random", "smallest-mailbox",
# "scatter-gather", "broadcast"
# - or: Fully qualified class name of the router class.
- # The class must extend akka.routing.CustomRouterConfig and
+ # The class must extend org.apache.pekko.routing.CustomRouterConfig and
# have a public constructor with com.typesafe.config.Config
- # and optional akka.actor.DynamicAccess parameter.
+ # and optional org.apache.pekko.actor.DynamicAccess parameter.
# - default is "from-code";
# Whether or not an actor is transformed to a Router is decided in code
# only (Props.withRouter). The type of router can be overridden in the
# Probability of doing an exploration vs. optimization.
chance-of-exploration = 0.4
- # When downsizing after a long streak of underutilization, the resizer
- # will downsize the pool to the highest utiliziation multiplied by a
+ # When downsizing after a long streak of under-utilization, the resizer
+ # will downsize the pool to the highest utilization multiplied by a
# downsize ratio. This downsize ratio determines the new pool's size
# in comparison to the highest utilization.
# E.g. if the highest utilization is 10, and the downsize ratio
}
"/IO-DNS/inet-address/*" {
- dispatcher = "akka.actor.default-blocking-io-dispatcher"
+ dispatcher = "pekko.actor.default-blocking-io-dispatcher"
}
"/IO-DNS/async-dns" {
# Dispatcher, PinnedDispatcher, or a FQCN to a class inheriting
# MessageDispatcherConfigurator with a public constructor with
# both com.typesafe.config.Config parameter and
- # akka.dispatch.DispatcherPrerequisites parameters.
+ # org.apache.pekko.dispatch.DispatcherPrerequisites parameters.
# PinnedDispatcher must be used together with executor=thread-pool-executor.
type = "Dispatcher"
}
# This will be used if you have set "executor = "affinity-pool-executor""
- # Underlying thread pool implementation is akka.dispatch.affinity.AffinityPool.
+ # Underlying thread pool implementation is org.apache.pekko.dispatch.affinity.AffinityPool.
# This executor is classified as "ApiMayChange".
affinity-pool-executor {
# Min number of threads to cap factor-based parallelism number to
# FQCN of the Rejection handler used in the pool.
# Must have an empty public constructor and must
- # implement akka.actor.affinity.RejectionHandlerFactory.
- rejection-handler = "akka.dispatch.affinity.ThrowOnOverflowRejectionHandler"
+ # implement org.apache.pekko.actor.affinity.RejectionHandlerFactory.
+ rejection-handler = "org.apache.pekko.dispatch.affinity.ThrowOnOverflowRejectionHandler"
# Level of CPU time used, on a scale between 1 and 10, during backoff/idle.
# The tradeoff is that to have low latency more CPU time must be used to be
# Level 10 strongly prefer low latency over low CPU consumption.
idle-cpu-level = 5
- # FQCN of the akka.dispatch.affinity.QueueSelectorFactory.
+ # FQCN of the org.apache.pekko.dispatch.affinity.QueueSelectorFactory.
# The Class of the FQCN must have a public constructor with a
# (com.typesafe.config.Config) parameter.
- # A QueueSelectorFactory create instances of akka.dispatch.affinity.QueueSelector,
+ # A QueueSelectorFactory creates instances of org.apache.pekko.dispatch.QueueSelector,
# that is responsible for determining which task queue a Runnable should be enqueued in.
- queue-selector = "akka.dispatch.affinity.FairDistributionHashCache"
+ queue-selector = "org.apache.pekko.dispatch.affinity.FairDistributionHashCache"
- # When using the "akka.dispatch.affinity.FairDistributionHashCache" queue selector
+ # When using the "org.apache.pekko.dispatch.affinity.FairDistributionHashCache" queue selector
# internally the AffinityPool uses two methods to determine which task
# queue to allocate a Runnable to:
# - map based - maintains a round robin counter and a map of Runnable
# Set to "FIFO" to use queue-like peeking mode ("poll"), or "LIFO" to use
# stack-like peeking mode ("pop").
task-peeking-mode = "FIFO"
+
+ # This config is new in Pekko v1.1.0 and only has an effect if you are running with JDK 9 and above.
+ # Read the documentation on `java.util.concurrent.ForkJoinPool` to find out more. Default in hex is 0x7fff.
+ maximum-pool-size = 32767
}
# This will be used if you have set "executor = "thread-pool-executor""
mailbox-requirement = ""
}
- # Default separate internal dispatcher to run Akka internal tasks and actors on
+ # Default separate internal dispatcher to run Pekko internal tasks and actors on
# protecting them against starvation because of accidental blocking in user actors (which run on the
# default dispatcher)
internal-dispatcher {
default-mailbox {
# FQCN of the MailboxType. The Class of the FQCN must have a public
# constructor with
- # (akka.actor.ActorSystem.Settings, com.typesafe.config.Config) parameters.
- mailbox-type = "akka.dispatch.UnboundedMailbox"
+ # (org.apache.pekko.actor.ActorSystem.Settings, com.typesafe.config.Config) parameters.
+ mailbox-type = "org.apache.pekko.dispatch.UnboundedMailbox"
# If the mailbox is bounded then it uses this setting to determine its
# capacity. The provided value must be positive.
mailbox {
# Mapping between message queue semantics and mailbox configurations.
- # Used by akka.dispatch.RequiresMessageQueue[T] to enforce different
+ # Used by org.apache.pekko.dispatch.RequiresMessageQueue[T] to enforce different
# mailbox types on actors.
# If your Actor implements RequiresMessageQueue[T], then when you create
# an instance of that actor its mailbox type will be decided by looking
# up a mailbox configuration via T in this mapping
requirements {
- "akka.dispatch.UnboundedMessageQueueSemantics" =
- akka.actor.mailbox.unbounded-queue-based
- "akka.dispatch.BoundedMessageQueueSemantics" =
- akka.actor.mailbox.bounded-queue-based
- "akka.dispatch.DequeBasedMessageQueueSemantics" =
- akka.actor.mailbox.unbounded-deque-based
- "akka.dispatch.UnboundedDequeBasedMessageQueueSemantics" =
- akka.actor.mailbox.unbounded-deque-based
- "akka.dispatch.BoundedDequeBasedMessageQueueSemantics" =
- akka.actor.mailbox.bounded-deque-based
- "akka.dispatch.MultipleConsumerSemantics" =
- akka.actor.mailbox.unbounded-queue-based
- "akka.dispatch.ControlAwareMessageQueueSemantics" =
- akka.actor.mailbox.unbounded-control-aware-queue-based
- "akka.dispatch.UnboundedControlAwareMessageQueueSemantics" =
- akka.actor.mailbox.unbounded-control-aware-queue-based
- "akka.dispatch.BoundedControlAwareMessageQueueSemantics" =
- akka.actor.mailbox.bounded-control-aware-queue-based
- "akka.event.LoggerMessageQueueSemantics" =
- akka.actor.mailbox.logger-queue
+ "org.apache.pekko.dispatch.UnboundedMessageQueueSemantics" =
+ pekko.actor.mailbox.unbounded-queue-based
+ "org.apache.pekko.dispatch.BoundedMessageQueueSemantics" =
+ pekko.actor.mailbox.bounded-queue-based
+ "org.apache.pekko.dispatch.DequeBasedMessageQueueSemantics" =
+ pekko.actor.mailbox.unbounded-deque-based
+ "org.apache.pekko.dispatch.UnboundedDequeBasedMessageQueueSemantics" =
+ pekko.actor.mailbox.unbounded-deque-based
+ "org.apache.pekko.dispatch.BoundedDequeBasedMessageQueueSemantics" =
+ pekko.actor.mailbox.bounded-deque-based
+ "org.apache.pekko.dispatch.MultipleConsumerSemantics" =
+ pekko.actor.mailbox.unbounded-queue-based
+ "org.apache.pekko.dispatch.ControlAwareMessageQueueSemantics" =
+ pekko.actor.mailbox.unbounded-control-aware-queue-based
+ "org.apache.pekko.dispatch.UnboundedControlAwareMessageQueueSemantics" =
+ pekko.actor.mailbox.unbounded-control-aware-queue-based
+ "org.apache.pekko.dispatch.BoundedControlAwareMessageQueueSemantics" =
+ pekko.actor.mailbox.bounded-control-aware-queue-based
+ "org.apache.pekko.event.LoggerMessageQueueSemantics" =
+ pekko.actor.mailbox.logger-queue
}
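As a sketch of how this mapping is consumed, an application.conf could point one requirement at a custom mailbox section (the `my-app.bounded-mailbox` name below is hypothetical):

```
pekko.actor.mailbox.requirements {
  # Hypothetical override: actors requiring bounded semantics get this mailbox
  "org.apache.pekko.dispatch.BoundedMessageQueueSemantics" = my-app.bounded-mailbox
}
my-app.bounded-mailbox {
  mailbox-type = "org.apache.pekko.dispatch.BoundedMailbox"
  mailbox-capacity = 1000
  mailbox-push-timeout-time = 10s
}
```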
unbounded-queue-based {
# FQCN of the MailboxType, The Class of the FQCN must have a public
- # constructor with (akka.actor.ActorSystem.Settings,
+ # constructor with (org.apache.pekko.actor.ActorSystem.Settings,
# com.typesafe.config.Config) parameters.
- mailbox-type = "akka.dispatch.UnboundedMailbox"
+ mailbox-type = "org.apache.pekko.dispatch.UnboundedMailbox"
}
bounded-queue-based {
# FQCN of the MailboxType, The Class of the FQCN must have a public
- # constructor with (akka.actor.ActorSystem.Settings,
+ # constructor with (org.apache.pekko.actor.ActorSystem.Settings,
# com.typesafe.config.Config) parameters.
- mailbox-type = "akka.dispatch.BoundedMailbox"
+ mailbox-type = "org.apache.pekko.dispatch.BoundedMailbox"
}
unbounded-deque-based {
# FQCN of the MailboxType, The Class of the FQCN must have a public
- # constructor with (akka.actor.ActorSystem.Settings,
+ # constructor with (org.apache.pekko.actor.ActorSystem.Settings,
# com.typesafe.config.Config) parameters.
- mailbox-type = "akka.dispatch.UnboundedDequeBasedMailbox"
+ mailbox-type = "org.apache.pekko.dispatch.UnboundedDequeBasedMailbox"
}
bounded-deque-based {
# FQCN of the MailboxType, The Class of the FQCN must have a public
- # constructor with (akka.actor.ActorSystem.Settings,
+ # constructor with (org.apache.pekko.actor.ActorSystem.Settings,
# com.typesafe.config.Config) parameters.
- mailbox-type = "akka.dispatch.BoundedDequeBasedMailbox"
+ mailbox-type = "org.apache.pekko.dispatch.BoundedDequeBasedMailbox"
}
unbounded-control-aware-queue-based {
# FQCN of the MailboxType, The Class of the FQCN must have a public
- # constructor with (akka.actor.ActorSystem.Settings,
+ # constructor with (org.apache.pekko.actor.ActorSystem.Settings,
# com.typesafe.config.Config) parameters.
- mailbox-type = "akka.dispatch.UnboundedControlAwareMailbox"
+ mailbox-type = "org.apache.pekko.dispatch.UnboundedControlAwareMailbox"
}
bounded-control-aware-queue-based {
# FQCN of the MailboxType, The Class of the FQCN must have a public
- # constructor with (akka.actor.ActorSystem.Settings,
+ # constructor with (org.apache.pekko.actor.ActorSystem.Settings,
# com.typesafe.config.Config) parameters.
- mailbox-type = "akka.dispatch.BoundedControlAwareMailbox"
+ mailbox-type = "org.apache.pekko.dispatch.BoundedControlAwareMailbox"
}
# The LoggerMailbox will drain all messages in the mailbox
# when the system is shutdown and deliver them to the StandardOutLogger.
# Do not change this unless you know what you are doing.
logger-queue {
- mailbox-type = "akka.event.LoggerMailboxType"
+ mailbox-type = "org.apache.pekko.event.LoggerMailboxType"
}
}
debug {
# enable function of Actor.loggable(), which is to log any received message
- # at DEBUG level, see the “Testing Actor Systems” section of the Akka
- # Documentation at https://akka.io/docs
+ # at DEBUG level, see the “Testing Actor Systems” section of the Pekko
+ # Documentation at https://pekko.apache.org/docs/pekko/current/
receive = off
# enable DEBUG logging of all AutoReceiveMessages (Kill, PoisonPill etc.)
# This setting is a short-cut to
# - using DisabledJavaSerializer instead of JavaSerializer
#
- # Completely disable the use of `akka.serialization.JavaSerialization` by the
- # Akka Serialization extension, instead DisabledJavaSerializer will
+ # Completely disable the use of `org.apache.pekko.serialization.JavaSerialization` by the
+ # Pekko Serialization extension, instead DisabledJavaSerializer will
# be inserted which will fail explicitly if attempts to use java serialization are made.
#
# The log messages emitted by such serializer SHOULD be treated as potential
# Entries for pluggable serializers and their bindings.
serializers {
- java = "akka.serialization.JavaSerializer"
- bytes = "akka.serialization.ByteArraySerializer"
- primitive-long = "akka.serialization.LongSerializer"
- primitive-int = "akka.serialization.IntSerializer"
- primitive-string = "akka.serialization.StringSerializer"
- primitive-bytestring = "akka.serialization.ByteStringSerializer"
- primitive-boolean = "akka.serialization.BooleanSerializer"
+ java = "org.apache.pekko.serialization.JavaSerializer"
+ bytes = "org.apache.pekko.serialization.ByteArraySerializer"
+ primitive-long = "org.apache.pekko.serialization.LongSerializer"
+ primitive-int = "org.apache.pekko.serialization.IntSerializer"
+ primitive-string = "org.apache.pekko.serialization.StringSerializer"
+ primitive-bytestring = "org.apache.pekko.serialization.ByteStringSerializer"
+ primitive-boolean = "org.apache.pekko.serialization.BooleanSerializer"
}
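A custom serializer is plugged in alongside the built-in ones; a sketch with hypothetical class names and identifier:

```
pekko.actor {
  serializers {
    # Hypothetical serializer implementation
    my-json = "com.example.MyJsonSerializer"
  }
  serialization-bindings {
    # Messages of this (hypothetical) type are serialized with my-json
    "com.example.MyMessage" = my-json
  }
  serialization-identifiers {
    # Must be globally unique and outside the 0-40 range reserved for Pekko
    "com.example.MyJsonSerializer" = 41
  }
}
```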
# Class to Serializer binding. You only need to specify the name of an
"java.io.Serializable" = java
"java.lang.String" = primitive-string
- "akka.util.ByteString$ByteString1C" = primitive-bytestring
- "akka.util.ByteString$ByteString1" = primitive-bytestring
- "akka.util.ByteString$ByteStrings" = primitive-bytestring
+ "org.apache.pekko.util.ByteString$ByteString1C" = primitive-bytestring
+ "org.apache.pekko.util.ByteString$ByteString1" = primitive-bytestring
+ "org.apache.pekko.util.ByteString$ByteStrings" = primitive-bytestring
"java.lang.Long" = primitive-long
"scala.Long" = primitive-long
"java.lang.Integer" = primitive-int
# Configuration namespace of serialization identifiers.
# Each serializer implementation must have an entry in the following format:
- # `akka.actor.serialization-identifiers."FQCN" = ID`
+ # `pekko.actor.serialization-identifiers."FQCN" = ID`
# where `FQCN` is fully qualified class name of the serializer implementation
# and `ID` is globally unique serializer identifier number.
- # Identifier values from 0 to 40 are reserved for Akka internal usage.
+ # Identifier values from 0 to 40 are reserved for Pekko internal usage.
serialization-identifiers {
- "akka.serialization.JavaSerializer" = 1
- "akka.serialization.ByteArraySerializer" = 4
+ "org.apache.pekko.serialization.JavaSerializer" = 1
+ "org.apache.pekko.serialization.ByteArraySerializer" = 4
primitive-long = 18
primitive-int = 19
"com.google.protobuf.GeneratedMessage",
"com.google.protobuf.GeneratedMessageV3",
"scalapb.GeneratedMessageCompanion",
- "akka.protobuf.GeneratedMessage",
- "akka.protobufv3.internal.GeneratedMessageV3"
+ "org.apache.pekko.protobufv3.internal.GeneratedMessageV3"
]
# Additional classes that are allowed even if they are not defined in `serialization-bindings`.
# It can be exact class name or name of super class or interfaces (one level).
# This is useful when a class is not used for serialization any more and therefore removed
# from `serialization-bindings`, but should still be possible to deserialize.
- allowed-classes = ${akka.serialization.protobuf.whitelist-class}
+ allowed-classes = ${pekko.serialization.protobuf.whitelist-class}
}
# Used to set the behavior of the scheduler.
# Changing the default values may change the system behavior drastically so make
- # sure you know what you're doing! See the Scheduler section of the Akka
+ # sure you know what you're doing! See the Scheduler section of the Pekko
# Documentation for more details.
scheduler {
# The LightArrayRevolverScheduler is used as the default scheduler in the
# This setting selects the timer implementation which shall be loaded at
# system start-up.
- # The class given here must implement the akka.actor.Scheduler interface
+ # The class given here must implement the org.apache.pekko.actor.Scheduler interface
# and offer a public constructor which takes three arguments:
# 1) com.typesafe.config.Config
- # 2) akka.event.LoggingAdapter
+ # 2) org.apache.pekko.event.LoggingAdapter
# 3) java.util.concurrent.ThreadFactory
- implementation = akka.actor.LightArrayRevolverScheduler
+ implementation = org.apache.pekko.actor.LightArrayRevolverScheduler
# When shutting down the scheduler, there will typically be a thread which
# needs to be stopped, and this timeout determines how long to wait for
# Fully qualified config path which holds the dispatcher configuration
# to be used for running the select() calls in the selectors
- selector-dispatcher = "akka.io.pinned-dispatcher"
+ selector-dispatcher = "pekko.io.pinned-dispatcher"
# Fully qualified config path which holds the dispatcher configuration
# for the read/write worker actors
- worker-dispatcher = "akka.actor.internal-dispatcher"
+ worker-dispatcher = "pekko.actor.internal-dispatcher"
# Fully qualified config path which holds the dispatcher configuration
# for the selector management actors
- management-dispatcher = "akka.actor.internal-dispatcher"
+ management-dispatcher = "pekko.actor.internal-dispatcher"
# Fully qualified config path which holds the dispatcher configuration
# on which file IO tasks are scheduled
- file-io-dispatcher = "akka.actor.default-blocking-io-dispatcher"
+ file-io-dispatcher = "pekko.actor.default-blocking-io-dispatcher"
# The maximum number of bytes (or "unlimited") to transfer in one batch
# when using `WriteFile` command which uses `FileChannel.transferTo` to
# Fully qualified config path which holds the dispatcher configuration
# to be used for running the select() calls in the selectors
- selector-dispatcher = "akka.io.pinned-dispatcher"
+ selector-dispatcher = "pekko.io.pinned-dispatcher"
# Fully qualified config path which holds the dispatcher configuration
# for the read/write worker actors
- worker-dispatcher = "akka.actor.internal-dispatcher"
+ worker-dispatcher = "pekko.actor.internal-dispatcher"
# Fully qualified config path which holds the dispatcher configuration
# for the selector management actors
- management-dispatcher = "akka.actor.internal-dispatcher"
+ management-dispatcher = "pekko.actor.internal-dispatcher"
}
udp-connected {
# Fully qualified config path which holds the dispatcher configuration
# to be used for running the select() calls in the selectors
- selector-dispatcher = "akka.io.pinned-dispatcher"
+ selector-dispatcher = "pekko.io.pinned-dispatcher"
# Fully qualified config path which holds the dispatcher configuration
# for the read/write worker actors
- worker-dispatcher = "akka.actor.internal-dispatcher"
+ worker-dispatcher = "pekko.actor.internal-dispatcher"
# Fully qualified config path which holds the dispatcher configuration
# for the selector management actors
- management-dispatcher = "akka.actor.internal-dispatcher"
+ management-dispatcher = "pekko.actor.internal-dispatcher"
}
dns {
# Fully qualified config path which holds the dispatcher configuration
# for the manager and resolver router actors.
- # For actual router configuration see akka.actor.deployment./IO-DNS/*
- dispatcher = "akka.actor.internal-dispatcher"
+ # For actual router configuration see pekko.actor.deployment./IO-DNS/*
+ dispatcher = "pekko.actor.internal-dispatcher"
- # Name of the subconfig at path akka.io.dns, see inet-address below
+ # Name of the subconfig at path pekko.io.dns, see inet-address below
#
# Change to `async-dns` to use the new "native" DNS resolver,
# which is also capable of resolving SRV records.
resolver = "inet-address"
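Switching to the native resolver is a one-line override in application.conf:

```
# Use the native, non-blocking resolver (also capable of resolving SRV records)
pekko.io.dns.resolver = "async-dns"
```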
# To-be-deprecated DNS resolver implementation which uses the Java InetAddress to resolve DNS records.
- # To be replaced by `akka.io.dns.async` which implements the DNS protocol natively and without blocking (which InetAddress does)
+ # To be replaced by `pekko.io.dns.async` which implements the DNS protocol natively and without blocking (which InetAddress does)
inet-address {
- # Must implement akka.io.DnsProvider
- provider-object = "akka.io.InetAddressDnsProvider"
+ # Must implement org.apache.pekko.io.DnsProvider
+ provider-object = "org.apache.pekko.io.InetAddressDnsProvider"
# To set the time to cache name resolutions
# Possible values:
}
async-dns {
- provider-object = "akka.io.dns.internal.AsyncDnsProvider"
+ provider-object = "org.apache.pekko.io.dns.internal.AsyncDnsProvider"
# Set upper bound for caching successfully resolved dns entries
# if the DNS record has a smaller TTL value than the setting that
# Defaults to a system dependent lookup (on Unix like OSes, will attempt to parse /etc/resolv.conf, on
# other platforms, will default to 1).
ndots = default
+
+ # The policy used to generate dns transaction ids. Options are `thread-local-random`,
+ # `enhanced-double-hash-random` or `secure-random`. Defaults to `enhanced-double-hash-random` which uses an
+ # enhanced double hashing algorithm optimized for minimizing collisions with a FIPS compliant initial seed.
+ # `thread-local-random` is similar to Netty and `secure-random` produces FIPS compliant random numbers every
+ # time but could block looking for entropy (these are short integers so are easy to brute-force, use
+ # `enhanced-double-hash-random` unless you really require FIPS compliant random numbers).
+ id-generator-policy = enhanced-double-hash-random
}
}
}
# Run the coordinated shutdown when the JVM process exits, e.g.
# via kill SIGTERM signal (SIGINT ctrl-c doesn't work).
- # This property is related to `akka.jvm-shutdown-hooks` above.
+ # This property is related to `pekko.jvm-shutdown-hooks` above.
run-by-jvm-shutdown-hook = on
# Run the coordinated shutdown when ActorSystem.terminate is called.
# Overrides are applied using the `reason.getClass.getName`.
# Overrides the `exit-code` when the `Reason` is a cluster
# Downing or a Cluster Join Unsuccessful event
- "akka.actor.CoordinatedShutdown$ClusterDowningReason$" {
+ "org.apache.pekko.actor.CoordinatedShutdown$ClusterDowningReason$" {
exit-code = -1
}
- "akka.actor.CoordinatedShutdown$ClusterJoinUnsuccessfulReason$" {
+ "org.apache.pekko.actor.CoordinatedShutdown$ClusterJoinUnsuccessfulReason$" {
exit-code = -1
}
}
#//#coordinated-shutdown-phases
# CoordinatedShutdown is enabled by default and will run the tasks that
- # are added to these phases by individual Akka modules and user logic.
+ # are added to these phases by individual Pekko modules and user logic.
#
# The phases are ordered as a DAG by defining the dependencies between the phases
# to make sure shutdown tasks are run in the right order.
# identify or look up the circuit breaker.
# Note: Circuit breakers created without ids are not affected by this configuration.
# A child configuration section with the same name as the circuit breaker identifier
- # will be used, with fallback to the `akka.circuit-breaker.default` section.
+ # will be used, with fallback to the `pekko.circuit-breaker.default` section.
circuit-breaker {
# Default configuration that is used if a configuration section
# In order to skip this additional delay set as 0
random-factor = 0.0
- # A allowlist of fqcn of Exceptions that the CircuitBreaker
+ # An allow-list of FQCNs of Exceptions that the CircuitBreaker
# should not consider failures. By default all exceptions are
# considered failures.
exception-allowlist = []
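For example, to keep the breaker closed on a (hypothetical) business-level exception:

```
pekko.circuit-breaker.default {
  # Hypothetical exception type; such failures won't trip the breaker
  exception-allowlist = ["com.example.ValidationException"]
}
```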
-akka.actor.typed {
- # List FQCN of `akka.actor.typed.ExtensionId`s which shall be loaded at actor system startup.
+# SPDX-License-Identifier: Apache-2.0
+pekko.actor.typed {
+
+ # List FQCN of `org.apache.pekko.actor.typed.ExtensionId`s which shall be loaded at actor system startup.
# Should be on the format: 'extensions = ["com.example.MyExtId1", "com.example.MyExtId2"]' etc.
- # See the Akka Documentation for more info about Extensions
+ # See the Pekko Documentation for more info about Extensions
extensions = []
# List FQCN of extensions which shall be loaded at actor system startup.
#
# Should not be set by end user applications in 'application.conf', use the extensions property for that
#
- library-extensions = ${?akka.actor.typed.library-extensions} []
+ library-extensions = ${?pekko.actor.typed.library-extensions} []
# Receptionist is started eagerly to allow clustered receptionist to gather remote registrations early on.
- library-extensions += "akka.actor.typed.receptionist.Receptionist$"
+ library-extensions += "org.apache.pekko.actor.typed.receptionist.Receptionist$"
# While an actor is restarted (waiting for backoff to expire and children to stop)
# incoming messages and signals are stashed, and delivered later to the newly restarted
# buffer. If the capacity is exceed then additional incoming messages are dropped.
restart-stash-capacity = 1000
- # Typed mailbox defaults to the single consumber mailbox as balancing dispatcher is not supported
+ # Typed mailbox defaults to the single consumer mailbox as balancing dispatcher is not supported
default-mailbox {
- mailbox-type = "akka.dispatch.SingleConsumerOnlyUnboundedMailbox"
+ mailbox-type = "org.apache.pekko.dispatch.SingleConsumerOnlyUnboundedMailbox"
}
}
# Load typed extensions by a classic extension.
-akka.library-extensions += "akka.actor.typed.internal.adapter.ActorSystemAdapter$LoadTypedExtensions"
+pekko.library-extensions += "org.apache.pekko.actor.typed.internal.adapter.ActorSystemAdapter$LoadTypedExtensions"
-akka.actor {
+pekko.actor {
serializers {
- typed-misc = "akka.actor.typed.internal.MiscMessageSerializer"
- service-key = "akka.actor.typed.internal.receptionist.ServiceKeySerializer"
+ typed-misc = "org.apache.pekko.actor.typed.internal.MiscMessageSerializer"
+ service-key = "org.apache.pekko.actor.typed.internal.receptionist.ServiceKeySerializer"
}
serialization-identifiers {
- "akka.actor.typed.internal.MiscMessageSerializer" = 24
- "akka.actor.typed.internal.receptionist.ServiceKeySerializer" = 26
+ "org.apache.pekko.actor.typed.internal.MiscMessageSerializer" = 24
+ "org.apache.pekko.actor.typed.internal.receptionist.ServiceKeySerializer" = 26
}
serialization-bindings {
- "akka.actor.typed.ActorRef" = typed-misc
- "akka.actor.typed.internal.adapter.ActorRefAdapter" = typed-misc
- "akka.actor.typed.internal.receptionist.DefaultServiceKey" = service-key
+ "org.apache.pekko.actor.typed.ActorRef" = typed-misc
+ "org.apache.pekko.actor.typed.internal.adapter.ActorRefAdapter" = typed-misc
+ "org.apache.pekko.actor.typed.internal.receptionist.DefaultServiceKey" = service-key
}
}
-# When using Akka Typed (having akka-actor-typed in classpath) the
-# akka.event.slf4j.Slf4jLogger is enabled instead of the DefaultLogger
-# even though it has not been explicitly defined in `akka.loggers`
+# When using Pekko Typed (having pekko-actor-typed in classpath) the
+# org.apache.pekko.event.slf4j.Slf4jLogger is enabled instead of the DefaultLogger
+# even though it has not been explicitly defined in `pekko.loggers`
# configuration.
#
-# Slf4jLogger will be used for all Akka classic logging via eventStream,
-# including logging from Akka internals. The Slf4jLogger is then using
+# Slf4jLogger will be used for all Pekko classic logging via eventStream,
+# including logging from Pekko internals. The Slf4jLogger is then using
# an ordinary org.slf4j.Logger to emit the log events.
#
# The Slf4jLoggingFilter is also enabled automatically.
#
# This behavior can be disabled by setting this property to `off`.
-akka.use-slf4j = on
+pekko.use-slf4j = on
-akka.reliable-delivery {
+pekko.reliable-delivery {
producer-controller {
# To avoid head of line blocking from serialization and transfer
# of large messages this can be enabled.
# Large messages are chunked into pieces of the given size in bytes. The
- # chunked messages are sent separatetely and assembled on the consumer side.
+ # chunked messages are sent separately and assembled on the consumer side.
# Serialization and deserialization is performed by the ProducerController and
# ConsumerController respectively instead of in the remote transport layer.
chunk-large-messages = off
}
work-pulling {
- producer-controller = ${akka.reliable-delivery.producer-controller}
+ producer-controller = ${pekko.reliable-delivery.producer-controller}
producer-controller {
# Limit of how many messages that can be buffered when there
# is no demand from the consumer side.
internal-ask-timeout = 60s
# Chunked messages not implemented for work-pulling yet. Override to not
- # propagate property from akka.reliable-delivery.producer-controller.
+ # propagate property from pekko.reliable-delivery.producer-controller.
chunk-large-messages = off
}
}
-######################################
-# Akka Cluster Reference Config File #
-######################################
+# SPDX-License-Identifier: Apache-2.0
+
+#######################################
+# Pekko Cluster Reference Config File #
+#######################################
# This is the reference config file that contains all the default settings.
# Make your edits/overrides in your application.conf.
-akka {
+pekko {
cluster {
# Initial contact points of the cluster.
# The nodes to join automatically at startup.
# Comma separated full URIs defined by a string on the form of
- # "akka://system@hostname:port"
+ # "pekko://system@hostname:port"
# Leave as empty if the node is supposed to be joined manually.
seed-nodes = []
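A static two-node seed list following the URI form above might look like this (system name, hosts and ports are hypothetical):

```
pekko.cluster.seed-nodes = [
  "pekko://ClusterSystem@10.0.0.1:7355",
  "pekko://ClusterSystem@10.0.0.2:7355"]
```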
# When this is the first seed node and there is no positive reply from the other
# seed nodes within this timeout it will join itself to bootstrap the cluster.
# When this is not the first seed node the join attempts will be performed with
- # this interval.
+ # this interval.
seed-node-timeout = 5s
# If a join request fails it will be retried after this period.
# Disable join retry by specifying "off".
retry-unsuccessful-join-after = 10s
-
+
# The joining of given seed nodes will by default be retried indefinitely until
# a successful join. That process can be aborted if unsuccessful by defining this
# timeout. When aborted it will run CoordinatedShutdown, which by default will
# terminate the ActorSystem. CoordinatedShutdown can also be configured to exit
# the JVM. It is useful to define this timeout if the seed-nodes are assembled
# dynamically and a restart with new seed-nodes should be tried after unsuccessful
- # attempts.
+ # attempts.
shutdown-after-unsuccessful-join-seed-nodes = off
# Time margin after which shards or singletons that belonged to a downed/removed
# e.g. by keeping the larger side of the partition and shutting down the smaller side.
# Disable with "off" or specify a duration to enable.
#
- # When using the `akka.cluster.sbr.SplitBrainResolver` as downing provider it will use
- # the akka.cluster.split-brain-resolver.stable-after as the default down-removal-margin
+ # When using the `org.apache.pekko.cluster.sbr.SplitBrainResolver` as downing provider it will use
+ # the pekko.cluster.split-brain-resolver.stable-after as the default down-removal-margin
# if this down-removal-margin is undefined.
down-removal-margin = off
# If this setting is left empty the `NoDowning` provider is used and no automatic downing will be performed.
#
# If specified the value must be the fully qualified class name of a subclass of
- # `akka.cluster.DowningProvider` having a public one argument constructor accepting an `ActorSystem`
+ # `org.apache.pekko.cluster.DowningProvider` having a public one argument constructor accepting an `ActorSystem`
downing-provider-class = ""
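To enable the bundled split brain resolver referenced earlier in this file, point the setting at its FQCN:

```
pekko.cluster.downing-provider-class = "org.apache.pekko.cluster.sbr.SplitBrainResolver"
```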
# Artery only setting
# special role assigned from the data-center a node belongs to (see the
# multi-data-center section below)
roles = []
-
+
# Run the coordinated shutdown from phase 'cluster-shutdown' when the cluster
# is shutdown for other reasons than when leaving, e.g. when downing. This
# will terminate the ActorSystem when the cluster extension is shutdown.
#
# It has support for https://github.com/dwijnand/sbt-dynver format with `+` or
# `-` separator. The number of commits from the tag is handled as a numeric part.
- # For example `1.0.0+3-73475dce26` is less than `1.0.10+10-ed316bd024` (3 < 10).
+ # For example `1.0.0+3-73475dce26` is less than `1.0.0+10-ed316bd024` (3 < 10).
app-version = "0.0.0"
# Minimum required number of members before the leader changes member status
min-nr-of-members = 1
# Enable/disable info level logging of cluster events.
- # These are logged with logger name `akka.cluster.Cluster`.
+ # These are logged with logger name `org.apache.pekko.cluster.Cluster`.
log-info = on
# Enable/disable verbose info-level logging of cluster events
# for temporary troubleshooting. Defaults to 'off'.
- # These are logged with logger name `akka.cluster.Cluster`.
+ # These are logged with logger name `org.apache.pekko.cluster.Cluster`.
log-info-verbose = off
# Enable or disable JMX MBeans for management of the cluster
jmx.enabled = on
# Enable or disable multiple JMX MBeans in the same JVM
- # If this is disabled, the MBean Object name is "akka:type=Cluster"
- # If this is enabled, them MBean Object names become "akka:type=Cluster,port=$clusterPortNumber"
+ # If this is disabled, the MBean Object name is "pekko:type=Cluster"
+ # If this is enabled, the MBean Object names become "pekko:type=Cluster,port=$clusterPortNumber"
jmx.multi-mbeans-in-same-jvm = off
# how long should the node wait before starting the periodic tasks
# how often should the node send out gossip information?
gossip-interval = 1s
-
+
# discard incoming gossip messages if not handled within this duration
gossip-time-to-live = 2s
# The id of the dispatcher to use for cluster actors.
# If specified you need to define the settings of the actual dispatcher.
- use-dispatcher = "akka.actor.internal-dispatcher"
+ use-dispatcher = "pekko.actor.internal-dispatcher"
# Gossip to random node with newer or older state information, if any with
# this probability. Otherwise Gossip to any random live node.
# Probability value is between 0.0 and 1.0. 0.0 means never, 1.0 means always.
gossip-different-view-probability = 0.8
-
+
# Reduce the above probability when the number of nodes in the cluster
# is greater than this value.
reduce-gossip-different-view-probability = 400
failure-detector {
# FQCN of the failure detector implementation.
- # It must implement akka.remote.FailureDetector and have
+ # It must implement org.apache.pekko.remote.FailureDetector and have
# a public constructor with a com.typesafe.config.Config and
- # akka.actor.EventStream parameter.
- implementation-class = "akka.remote.PhiAccrualFailureDetector"
+ # org.apache.pekko.actor.EventStream parameter.
+ implementation-class = "org.apache.pekko.remote.PhiAccrualFailureDetector"
# How often keep-alive heartbeat messages should be sent to each connection.
heartbeat-interval = 1 s
# Number of member nodes that each member will send heartbeat messages to,
# i.e. each node will be monitored by this number of other nodes.
monitored-by-nr-of-members = 9
-
+
# After the heartbeat request has been sent the first failure detection
# will start after this period, even though no heartbeat message has
# been received.
# Configures multi-dc specific heartbeating and other mechanisms,
# many of them have a direct counterpart in "one data center mode",
# in which case these settings would not be used at all - they only apply
- # if your cluster nodes are configured with at-least 2 different `akka.cluster.data-center` values.
+ # if your cluster nodes are configured with at least 2 different `pekko.cluster.data-center` values.
multi-data-center {
# Defines which data center this node belongs to. It is typically used to make islands of the
- # cluster that are colocated. This can be used to make the cluster aware that it is running
+ # cluster that are co-located. This can be used to make the cluster aware that it is running
# across multiple availability zones or regions. It can also be used for other logical
# grouping of nodes.
self-data-center = "default"
failure-detector {
# FQCN of the failure detector implementation.
- # It must implement akka.remote.FailureDetector and have
+ # It must implement org.apache.pekko.remote.FailureDetector and have
# a public constructor with a com.typesafe.config.Config and
- # akka.actor.EventStream parameter.
- implementation-class = "akka.remote.DeadlineFailureDetector"
-
+ # org.apache.pekko.actor.EventStream parameter.
+ implementation-class = "org.apache.pekko.remote.DeadlineFailureDetector"
+
# Number of potentially lost/delayed heartbeats that will be
# accepted before considering it to be an anomaly.
# This margin is important to be able to survive sudden, occasional,
# pauses in heartbeat arrivals, due to, for example, garbage collection or
# network drop.
acceptable-heartbeat-pause = 10 s
-
+
# How often keep-alive heartbeat messages should be sent to each connection.
heartbeat-interval = 3 s
-
+
# After the heartbeat request has been sent the first failure detection
# will start after this period, even though no heartbeat message has
# been received.
# If the tick-duration of the default scheduler is longer than the
# tick-duration configured here a dedicated scheduler will be used for
# periodic tasks of the cluster, otherwise the default scheduler is used.
- # See akka.scheduler settings for more details.
+ # See pekko.scheduler settings for more details.
scheduler {
tick-duration = 33ms
ticks-per-wheel = 512
debug {
# Log heartbeat events (very verbose, useful mostly when debugging heartbeating issues).
- # These are logged with logger name `akka.cluster.ClusterHeartbeat`.
+ # These are logged with logger name `org.apache.pekko.cluster.ClusterHeartbeat`.
verbose-heartbeat-logging = off
# log verbose details about gossip
# Checkers defined in reference.conf can be disabled by application by using empty string value
# for the named entry.
checkers {
- akka-cluster = "akka.cluster.JoinConfigCompatCheckCluster"
+ pekko-cluster = "org.apache.pekko.cluster.JoinConfigCompatCheckCluster"
}
# Some configuration properties might not be appropriate to transfer between nodes
# All properties starting with the paths defined here are excluded, i.e. you can add the path of a whole
# section here to skip everything inside that section.
sensitive-config-paths {
- akka = [
+ pekko = [
"user.home", "user.name", "user.dir",
"socksNonProxyHosts", "http.nonProxyHosts", "ftp.nonProxyHosts",
- "akka.remote.secure-cookie",
- "akka.remote.classic.netty.ssl.security",
+ "pekko.remote.secure-cookie",
+ "pekko.remote.classic.netty.ssl.security",
# Pre 2.6 path, keep around to avoid sending things misconfigured with old paths
- "akka.remote.netty.ssl.security",
- "akka.remote.artery.ssl"
+ "pekko.remote.netty.ssl.security",
+ "pekko.remote.artery.ssl"
]
}
# it will deploy 2 routees per new member in the cluster, up to
# 25 members.
max-nr-of-instances-per-node = 1
-
+
# Maximum number of routees that will be deployed, in total
# on all nodes. See also description of max-nr-of-instances-per-node.
# For backwards compatibility reasons, nr-of-instances
# has the same purpose as max-total-nr-of-instances for cluster
# aware routers and nr-of-instances (if defined by user) takes
- # precedence over max-total-nr-of-instances.
+ # precedence over max-total-nr-of-instances.
max-total-nr-of-instances = 10000
# Defines if routees are allowed to be located on the same node as
# Protobuf serializer for cluster messages
actor {
serializers {
- akka-cluster = "akka.cluster.protobuf.ClusterMessageSerializer"
+ pekko-cluster = "org.apache.pekko.cluster.protobuf.ClusterMessageSerializer"
}
serialization-bindings {
- "akka.cluster.ClusterMessage" = akka-cluster
- "akka.cluster.routing.ClusterRouterPool" = akka-cluster
+ "org.apache.pekko.cluster.ClusterMessage" = pekko-cluster
+ "org.apache.pekko.cluster.routing.ClusterRouterPool" = pekko-cluster
}
-
+
serialization-identifiers {
- "akka.cluster.protobuf.ClusterMessageSerializer" = 5
+ "org.apache.pekko.cluster.protobuf.ClusterMessageSerializer" = 5
}
-
+
}
}
#//#split-brain-resolver
# To enable the split brain resolver you first need to enable the provider in your application.conf:
-# akka.cluster.downing-provider-class = "akka.cluster.sbr.SplitBrainResolverProvider"
+# pekko.cluster.downing-provider-class = "org.apache.pekko.cluster.sbr.SplitBrainResolverProvider"
-akka.cluster.split-brain-resolver {
+pekko.cluster.split-brain-resolver {
# Select one of the available strategies (see descriptions below):
# static-quorum, keep-majority, keep-oldest, down-all, lease-majority
active-strategy = keep-majority
# consists of 3 nodes each, i.e. each side thinks it has enough nodes to continue by
# itself. A warning is logged if this recommendation is violated.
#//#static-quorum
-akka.cluster.split-brain-resolver.static-quorum {
+pekko.cluster.split-brain-resolver.static-quorum {
# minimum number of nodes that the cluster must have
quorum-size = undefined
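# For example (the cluster size here is illustrative, not a default): in a
# cluster planned for 5 nodes, a majority quorum is 3, which tolerates a
# partition cutting off up to 2 nodes:
# quorum-size = 3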
# Note that if there are more than two partitions and none is in majority, each part
# will shut itself down, terminating the whole cluster.
#//#keep-majority
-akka.cluster.split-brain-resolver.keep-majority {
+pekko.cluster.split-brain-resolver.keep-majority {
# if the 'role' is defined the decision is based only on members with that 'role'
role = ""
}
# when 'down-if-alone' is 'on', otherwise they will down themselves if the
# oldest node crashes, i.e. shut down the whole cluster together with the oldest node.
#//#keep-oldest
-akka.cluster.split-brain-resolver.keep-oldest {
+pekko.cluster.split-brain-resolver.keep-oldest {
# Enable downing of the oldest node when it is partitioned from all other nodes
down-if-alone = on
# This is achieved by adding a delay before trying to acquire the lease on the
# minority side.
#//#lease-majority
-akka.cluster.split-brain-resolver.lease-majority {
+pekko.cluster.split-brain-resolver.lease-majority {
lease-implementation = ""
- # The recommended format for the lease name is "<service-name>-akka-sbr".
- # When lease-name is not defined, the name will be set to "<actor-system-name>-akka-sbr"
+ # The recommended format for the lease name is "<service-name>-pekko-sbr".
+ # When lease-name is not defined, the name will be set to "<actor-system-name>-pekko-sbr"
lease-name = ""
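# For example, for a hypothetical service named "inventory" the recommended
# lease name would be:
# lease-name = "inventory-pekko-sbr"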
# This delay is used on the minority side before trying to acquire the lease,
-############################################
-# Akka Cluster Tools Reference Config File #
-############################################
+# SPDX-License-Identifier: Apache-2.0
+
+#############################################
+# Pekko Cluster Tools Reference Config File #
+#############################################
# This is the reference config file that contains all the default settings.
# Make your edits/overrides in your application.conf.
# //#pub-sub-ext-config
# Settings for the DistributedPubSub extension
-akka.cluster.pub-sub {
+pekko.cluster.pub-sub {
# Actor name of the mediator actor, /system/distributedPubSubMediator
name = distributedPubSubMediator
# The id of the dispatcher to use for DistributedPubSubMediator actors.
# If specified you need to define the settings of the actual dispatcher.
- use-dispatcher = "akka.actor.internal-dispatcher"
+ use-dispatcher = "pekko.actor.internal-dispatcher"
}
# //#pub-sub-ext-config
# Protobuf serializer for cluster DistributedPubSubMeditor messages
-akka.actor {
+pekko.actor {
serializers {
- akka-pubsub = "akka.cluster.pubsub.protobuf.DistributedPubSubMessageSerializer"
+ pekko-pubsub = "org.apache.pekko.cluster.pubsub.protobuf.DistributedPubSubMessageSerializer"
}
serialization-bindings {
- "akka.cluster.pubsub.DistributedPubSubMessage" = akka-pubsub
- "akka.cluster.pubsub.DistributedPubSubMediator$Internal$SendToOneSubscriber" = akka-pubsub
+ "org.apache.pekko.cluster.pubsub.DistributedPubSubMessage" = pekko-pubsub
+ "org.apache.pekko.cluster.pubsub.DistributedPubSubMediator$Internal$SendToOneSubscriber" = pekko-pubsub
}
serialization-identifiers {
- "akka.cluster.pubsub.protobuf.DistributedPubSubMessageSerializer" = 9
+ "org.apache.pekko.cluster.pubsub.protobuf.DistributedPubSubMessageSerializer" = 9
}
}
# //#receptionist-ext-config
# Settings for the ClusterClientReceptionist extension
-akka.cluster.client.receptionist {
+pekko.cluster.client.receptionist {
# Actor name of the ClusterReceptionist actor, /system/receptionist
name = receptionist
# The id of the dispatcher to use for ClusterReceptionist actors.
# If specified you need to define the settings of the actual dispatcher.
- use-dispatcher = "akka.actor.internal-dispatcher"
+ use-dispatcher = "pekko.actor.internal-dispatcher"
# How often failure detection heartbeat messages should be received for
# each ClusterClient
# Number of potentially lost/delayed heartbeats that will be
# accepted before considering it to be an anomaly.
- # The ClusterReceptionist is using the akka.remote.DeadlineFailureDetector, which
+ # The ClusterReceptionist is using the org.apache.pekko.remote.DeadlineFailureDetector, which
# will trigger if there are no heartbeats within the duration
# heartbeat-interval + acceptable-heartbeat-pause, i.e. 15 seconds with
# the default settings.
# //#cluster-client-config
# Settings for the ClusterClient
-akka.cluster.client {
+pekko.cluster.client {
# Actor paths of the ClusterReceptionist actors on the servers (cluster nodes)
# that the client will try to contact initially. It is mandatory to specify
# at least one initial contact.
# Comma separated full actor paths defined by a string on the form of
- # "akka://system@hostname:port/system/receptionist"
+ # "pekko://system@hostname:port/system/receptionist"
initial-contacts = []
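# For example (the actor system name, host names and port below are
# placeholders, not defaults):
# initial-contacts = [
#   "pekko://ClusterSystem@host1:7355/system/receptionist",
#   "pekko://ClusterSystem@host2:7355/system/receptionist"]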
# Interval at which the client retries to establish contact with one of
# Number of potentially lost/delayed heartbeats that will be
# accepted before considering it to be an anomaly.
- # The ClusterClient is using the akka.remote.DeadlineFailureDetector, which
+ # The ClusterClient is using the org.apache.pekko.remote.DeadlineFailureDetector, which
# will trigger if there are no heartbeats within the duration
# heartbeat-interval + acceptable-heartbeat-pause, i.e. 15 seconds with
# the default settings.
# Maximum allowed buffer size is 10000.
buffer-size = 1000
- # If connection to the receiptionist is lost and the client has not been
+ # If connection to the receptionist is lost and the client has not been
# able to acquire a new connection for this long the client will stop itself.
# This duration makes it possible to watch the cluster client and react on a more permanent
# loss of connection with the cluster, for example by accessing some kind of
# //#cluster-client-config
# Protobuf serializer for ClusterClient messages
-akka.actor {
+pekko.actor {
serializers {
- akka-cluster-client = "akka.cluster.client.protobuf.ClusterClientMessageSerializer"
+ pekko-cluster-client = "org.apache.pekko.cluster.client.protobuf.ClusterClientMessageSerializer"
}
serialization-bindings {
- "akka.cluster.client.ClusterClientMessage" = akka-cluster-client
+ "org.apache.pekko.cluster.client.ClusterClientMessage" = pekko-cluster-client
}
serialization-identifiers {
- "akka.cluster.client.protobuf.ClusterClientMessageSerializer" = 15
+ "org.apache.pekko.cluster.client.protobuf.ClusterClientMessageSerializer" = 15
}
}
# //#singleton-config
-akka.cluster.singleton {
+pekko.cluster.singleton {
# The actor name of the child singleton actor.
singleton-name = "singleton"
# When a node is becoming oldest it sends hand-over request to previous oldest,
# that might be leaving the cluster. This is retried with this interval until
# the previous oldest confirms that the hand over has started or the previous
- # oldest member is removed from the cluster (+ akka.cluster.down-removal-margin).
+ # oldest member is removed from the cluster (+ pekko.cluster.down-removal-margin).
hand-over-retry-interval = 1s
# The number of retries are derived from hand-over-retry-interval and
- # akka.cluster.down-removal-margin (or ClusterSingletonManagerSettings.removalMargin),
+ # pekko.cluster.down-removal-margin (or ClusterSingletonManagerSettings.removalMargin),
# but it will never be less than this property.
# If, after the hand-over retries, it is still not able to exchange the hand-over messages
# with the previous oldest, it will restart itself by throwing ClusterSingletonManagerIsStuck,
# //#singleton-config
# //#singleton-proxy-config
-akka.cluster.singleton-proxy {
+pekko.cluster.singleton-proxy {
# The actor name of the singleton actor that is started by the ClusterSingletonManager
- singleton-name = ${akka.cluster.singleton.singleton-name}
+ singleton-name = ${pekko.cluster.singleton.singleton-name}
# The role of the cluster nodes where the singleton can be deployed.
# Corresponding to the role used by the `ClusterSingletonManager`. If the role is not
# //#singleton-proxy-config
# Serializer for cluster ClusterSingleton messages
-akka.actor {
+pekko.actor {
serializers {
- akka-singleton = "akka.cluster.singleton.protobuf.ClusterSingletonMessageSerializer"
+ pekko-singleton = "org.apache.pekko.cluster.singleton.protobuf.ClusterSingletonMessageSerializer"
}
serialization-bindings {
- "akka.cluster.singleton.ClusterSingletonMessage" = akka-singleton
+ "org.apache.pekko.cluster.singleton.ClusterSingletonMessage" = pekko-singleton
}
serialization-identifiers {
- "akka.cluster.singleton.protobuf.ClusterSingletonMessageSerializer" = 14
+ "org.apache.pekko.cluster.singleton.protobuf.ClusterSingletonMessageSerializer" = 14
}
}
-############################################
-# Akka Cluster Typed Reference Config File #
-############################################
+# SPDX-License-Identifier: Apache-2.0
+
+#############################################
+# Pekko Cluster Typed Reference Config File #
+#############################################
# This is the reference config file that contains all the default settings.
# Make your edits/overrides in your application.conf.
-akka.cluster.typed.receptionist {
+pekko.cluster.typed.receptionist {
# Updates with Distributed Data are done with this consistency level.
# Possible values: local, majority, all, 2, 3, 4 (n)
write-consistency = local
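# For example, to wait for acknowledgement from a majority of nodes before a
# receptionist update is considered written:
# write-consistency = majority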
distributed-key-count = 5
# Settings for the Distributed Data replicator used by Receptionist.
- # Same layout as akka.cluster.distributed-data.
- distributed-data = ${akka.cluster.distributed-data}
+ # Same layout as pekko.cluster.distributed-data.
+ distributed-data = ${pekko.cluster.distributed-data}
# make sure that by default it's for all roles (Play loads config in different way)
distributed-data.role = ""
}
-akka.cluster.ddata.typed {
+pekko.cluster.ddata.typed {
# The timeout to use for ask operations in ReplicatorMessageAdapter.
# This should be longer than the timeout given in Replicator.WriteConsistency and
# Replicator.ReadConsistency. The replicator will always send a reply within those
replicator-message-adapter-unexpected-ask-timeout = 20 s
}
-akka {
+pekko {
actor {
serialization-identifiers {
- "akka.cluster.typed.internal.AkkaClusterTypedSerializer" = 28
- "akka.cluster.typed.internal.delivery.ReliableDeliverySerializer" = 36
+ "org.apache.pekko.cluster.typed.internal.PekkoClusterTypedSerializer" = 28
+ "org.apache.pekko.cluster.typed.internal.delivery.ReliableDeliverySerializer" = 36
}
serializers {
- typed-cluster = "akka.cluster.typed.internal.AkkaClusterTypedSerializer"
- reliable-delivery = "akka.cluster.typed.internal.delivery.ReliableDeliverySerializer"
+ typed-cluster = "org.apache.pekko.cluster.typed.internal.PekkoClusterTypedSerializer"
+ reliable-delivery = "org.apache.pekko.cluster.typed.internal.delivery.ReliableDeliverySerializer"
}
serialization-bindings {
- "akka.cluster.typed.internal.receptionist.ClusterReceptionist$Entry" = typed-cluster
- "akka.actor.typed.internal.pubsub.TopicImpl$MessagePublished" = typed-cluster
- "akka.actor.typed.delivery.internal.DeliverySerializable" = reliable-delivery
+ "org.apache.pekko.cluster.typed.internal.receptionist.ClusterReceptionist$Entry" = typed-cluster
+ "org.apache.pekko.actor.typed.internal.pubsub.TopicImpl$MessagePublished" = typed-cluster
+ "org.apache.pekko.actor.typed.delivery.internal.DeliverySerializable" = reliable-delivery
}
}
cluster.configuration-compatibility-check.checkers {
- receptionist = "akka.cluster.typed.internal.receptionist.ClusterReceptionistConfigCompatChecker"
+ receptionist = "org.apache.pekko.cluster.typed.internal.receptionist.ClusterReceptionistConfigCompatChecker"
}
}
-##############################################
-# Akka Distributed DataReference Config File #
-##############################################
+# SPDX-License-Identifier: Apache-2.0
+
+################################################
+# Pekko Distributed Data Reference Config File #
+################################################
# This is the reference config file that contains all the default settings.
# Make your edits/overrides in your application.conf.
#//#distributed-data
# Settings for the DistributedData extension
-akka.cluster.distributed-data {
+pekko.cluster.distributed-data {
# Actor name of the Replicator actor, /system/ddataReplicator
name = ddataReplicator
# How often the Replicator should send out gossip information
gossip-interval = 2 s
-
+
# How often the subscribers will be notified of changes, if any
notify-subscribers-interval = 500 ms
# The actual number of data entries in each Gossip message is dynamically
# adjusted to not exceed the maximum remote message size (maximum-frame-size).
max-delta-elements = 500
-
+
# The id of the dispatcher to use for Replicator actors.
# If specified you need to define the settings of the actual dispatcher.
- use-dispatcher = "akka.actor.internal-dispatcher"
+ use-dispatcher = "pekko.actor.internal-dispatcher"
# How often the Replicator checks for pruning of data associated with
# removed cluster nodes. If this is set to 'off' the pruning feature will
# be completely disabled.
pruning-interval = 120 s
-
+
# How long it takes to spread the data to all other replica nodes.
# This is used when initiating and completing the pruning process of data associated
- # with removed cluster nodes. The time measurement is stopped when any replica is
+ # with removed cluster nodes. The time measurement is stopped when any replica is
# unreachable, but it's still recommended to configure this with a certain margin.
# It should be on the order of minutes even though typical dissemination time
- # is shorter (grows logarithmic with number of nodes). There is no advantage of
+ # is shorter (grows logarithmically with the number of nodes). There is no advantage in
# setting this too low. Setting it to a large value will delay the pruning process.
max-pruning-dissemination = 300 s
-
+
# The markers indicating that pruning has been performed for a removed node are kept for this
# time and thereafter removed. If an old data entry that was never pruned is somehow
# injected and merged with existing data after this time the value will not be correct.
# This would be possible (although unlikely) in the case of a long network partition.
- # It should be in the magnitude of hours. For durable data it is configured by
- # 'akka.cluster.distributed-data.durable.pruning-marker-time-to-live'.
+ # It should be on the order of hours. For durable data it is configured by
+ # 'pekko.cluster.distributed-data.durable.pruning-marker-time-to-live'.
pruning-marker-time-to-live = 6 h
-
- # Serialized Write and Read messages are cached when they are sent to
+
+ # Serialized Write and Read messages are cached when they are sent to
# several nodes. If there is no further activity they are removed from the cache
# after this duration.
serializer-cache-time-to-live = 10s
# Update and Get operations are sent to oldest nodes first.
# This is useful together with Cluster Singleton, which is running on oldest nodes.
prefer-oldest = off
-
+
# Settings for delta-CRDT
delta-crdt {
# enable or disable delta-CRDT replication
enabled = on
-
+
# Some complex deltas grow in size for each update and above this
# threshold such deltas are discarded and sent as full state instead.
# This is number of elements or similar size hint, not size in bytes.
max-delta-size = 50
}
-
+
durable {
# List of keys that are durable. Prefix matching is supported by using * at the
- # end of a key.
+ # end of a key.
keys = []
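# For example (the key names are illustrative), to make every key starting
# with "durable" durable, plus the single key "settings":
# keys = ["durable*", "settings"]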
-
+
# The markers indicating that pruning has been performed for a removed node are kept for this
# time and thereafter removed. If an old data entry that was never pruned is
# injected and merged with existing data after this time the value will not be correct.
# This would be possible if replica with durable data didn't participate in the pruning
- # (e.g. it was shutdown) and later started after this time. A durable replica should not
+ # (e.g. it was shut down) and later started after this time. A durable replica should not
# be stopped for a longer time than this duration and if it is joining again after this
# duration its data should first be manually removed (from the lmdb directory).
# It should be on the order of days. Note that there is a corresponding setting
- # for non-durable data: 'akka.cluster.distributed-data.pruning-marker-time-to-live'.
+ # for non-durable data: 'pekko.cluster.distributed-data.pruning-marker-time-to-live'.
pruning-marker-time-to-live = 10 d
-
+
# Fully qualified class name of the durable store actor. It must be a subclass
- # of akka.actor.Actor and handle the protocol defined in
- # akka.cluster.ddata.DurableStore. The class must have a constructor with
+ # of pekko.actor.Actor and handle the protocol defined in
+ # org.apache.pekko.cluster.ddata.DurableStore. The class must have a constructor with
# com.typesafe.config.Config parameter.
- store-actor-class = akka.cluster.ddata.LmdbDurableStore
-
- use-dispatcher = akka.cluster.distributed-data.durable.pinned-store
-
+ store-actor-class = org.apache.pekko.cluster.ddata.LmdbDurableStore
+
+ use-dispatcher = pekko.cluster.distributed-data.durable.pinned-store
+
pinned-store {
executor = thread-pool-executor
type = PinnedDispatcher
}
-
+
# Config for the LmdbDurableStore
lmdb {
# Directory of LMDB file. There are two options:
#
# When running in production you may want to configure this to a specific
# path (alt 2), since the default directory contains the remote port of the
- # actor system to make the name unique. If using a dynamically assigned
- # port (0) it will be different each time and the previously stored data
+ # actor system to make the name unique. If using a dynamically assigned
+ # port (0) it will be different each time and the previously stored data
# will not be loaded.
dir = "ddata"
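# Example of a fixed path (alt 2; the path itself is illustrative):
# dir = "/var/lib/pekko/ddata"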
-
+
# Size in bytes of the memory mapped file.
map-size = 100 MiB
-
+
# Accumulate changes before storing improves performance with the
# risk of losing the last writes if the JVM crashes.
# The interval is by default set to 'off' to write each update immediately.
- # Enabling write behind by specifying a duration, e.g. 200ms, is especially
- # efficient when performing many writes to the same key, because it is only
- # the last value for each key that will be serialized and stored.
+ # Enabling write behind by specifying a duration, e.g. 200ms, is especially
+ # efficient when performing many writes to the same key, because it is only
+ # the last value for each key that will be serialized and stored.
# write-behind-interval = 200 ms
write-behind-interval = off
}
}
-
+
}
#//#distributed-data
# Protobuf serializer for cluster DistributedData messages
-akka.actor {
+pekko.actor {
serializers {
- akka-data-replication = "akka.cluster.ddata.protobuf.ReplicatorMessageSerializer"
- akka-replicated-data = "akka.cluster.ddata.protobuf.ReplicatedDataSerializer"
+ pekko-data-replication = "org.apache.pekko.cluster.ddata.protobuf.ReplicatorMessageSerializer"
+ pekko-replicated-data = "org.apache.pekko.cluster.ddata.protobuf.ReplicatedDataSerializer"
}
serialization-bindings {
- "akka.cluster.ddata.Replicator$ReplicatorMessage" = akka-data-replication
- "akka.cluster.ddata.ReplicatedDataSerialization" = akka-replicated-data
+ "org.apache.pekko.cluster.ddata.Replicator$ReplicatorMessage" = pekko-data-replication
+ "org.apache.pekko.cluster.ddata.ReplicatedDataSerialization" = pekko-replicated-data
}
serialization-identifiers {
- "akka.cluster.ddata.protobuf.ReplicatedDataSerializer" = 11
- "akka.cluster.ddata.protobuf.ReplicatorMessageSerializer" = 12
+ "org.apache.pekko.cluster.ddata.protobuf.ReplicatedDataSerializer" = 11
+ "org.apache.pekko.cluster.ddata.protobuf.ReplicatorMessageSerializer" = 12
}
}
+# SPDX-License-Identifier: Apache-2.0
+
-###########################################################
-# Akka Persistence Extension Reference Configuration File #
-###########################################################
+############################################################
+# Pekko Persistence Extension Reference Configuration File #
+############################################################
# This is the reference config file that contains all the default settings.
# Make your edits in your application.conf in order to override these settings.
-# Directory of persistence journal and snapshot store plugins is available at the
-# Akka Community Projects page https://akka.io/community/
-
# Default persistence extension settings.
-akka.persistence {
+pekko.persistence {
# When starting many persistent actors at the same time the journal
# and its data store is protected from being overloaded by limiting number
# of recoveries that can be in progress at the same time. When
# exceeding the limit the actors will wait until other recoveries have
- # been completed.
+ # been completed.
max-concurrent-recoveries = 50
# Fully qualified class name providing a default internal stash overflow strategy.
- # It needs to be a subclass of akka.persistence.StashOverflowStrategyConfigurator.
+ # It needs to be a subclass of org.apache.pekko.persistence.StashOverflowStrategyConfigurator.
# The default strategy throws StashOverflowException.
- internal-stash-overflow-strategy = "akka.persistence.ThrowExceptionConfigurator"
+ internal-stash-overflow-strategy = "org.apache.pekko.persistence.ThrowExceptionConfigurator"
journal {
- # Absolute path to the journal plugin configuration entry used by
+ # Absolute path to the journal plugin configuration entry used by
# persistent actor by default.
- # Persistent actor can override `journalPluginId` method
+ # Persistent actor can override `journalPluginId` method
# in order to rely on a different journal plugin.
plugin = ""
# List of journal plugins to start automatically. Use "" for the default journal plugin.
# It is not mandatory to specify a snapshot store plugin.
# If you don't use snapshots you don't have to configure it.
# Note that Cluster Sharding is using snapshots, so if you
- # use Cluster Sharding you need to define a snapshot store plugin.
+ # use Cluster Sharding you need to define a snapshot store plugin.
plugin = ""
# List of snapshot stores to start automatically. Use "" for the default snapshot store.
auto-start-snapshot-stores = []
}
- # used as default-snapshot store if no plugin configured
- # (see `akka.persistence.snapshot-store`)
+ # used as default-snapshot store if no plugin configured
+ # (see `pekko.persistence.snapshot-store`)
no-snapshot-store {
- class = "akka.persistence.snapshot.NoSnapshotStore"
+ class = "org.apache.pekko.persistence.snapshot.NoSnapshotStore"
}
# Default reliable delivery settings.
at-least-once-delivery {
# Interval between re-delivery attempts.
redeliver-interval = 5s
- # Maximum number of unconfirmed messages that will be sent in one
+ # Maximum number of unconfirmed messages that will be sent in one
# re-delivery burst.
redelivery-burst-limit = 10000
- # After this number of delivery attempts a
+ # After this number of delivery attempts a
# `ReliableRedelivery.UnconfirmedWarning`, message will be sent to the actor.
warn-after-number-of-unconfirmed-attempts = 5
- # Maximum number of unconfirmed messages that an actor with
+ # Maximum number of unconfirmed messages that an actor with
# AtLeastOnceDelivery is allowed to hold in memory.
max-unconfirmed-messages = 100000
}
class = ""
# Dispatcher for the plugin actor.
- plugin-dispatcher = "akka.persistence.dispatchers.default-plugin-dispatcher"
+ plugin-dispatcher = "pekko.persistence.dispatchers.default-plugin-dispatcher"
# Dispatcher for message replay.
- replay-dispatcher = "akka.persistence.dispatchers.default-replay-dispatcher"
+ replay-dispatcher = "pekko.persistence.dispatchers.default-replay-dispatcher"
# Removed: used to be the maximum size of a persistent message batch written to the journal.
# Now this setting has no effect; PersistentActor will write as many messages
class = ""
# Dispatcher for the plugin actor.
- plugin-dispatcher = "akka.persistence.dispatchers.default-plugin-dispatcher"
+ plugin-dispatcher = "pekko.persistence.dispatchers.default-plugin-dispatcher"
circuit-breaker {
max-failures = 5
}
# Protobuf serialization for the persistent extension messages.
-akka.actor {
+pekko.actor {
serializers {
- akka-persistence-message = "akka.persistence.serialization.MessageSerializer"
- akka-persistence-snapshot = "akka.persistence.serialization.SnapshotSerializer"
+ pekko-persistence-message = "org.apache.pekko.persistence.serialization.MessageSerializer"
+ pekko-persistence-snapshot = "org.apache.pekko.persistence.serialization.SnapshotSerializer"
}
serialization-bindings {
- "akka.persistence.serialization.Message" = akka-persistence-message
- "akka.persistence.serialization.Snapshot" = akka-persistence-snapshot
+ "org.apache.pekko.persistence.serialization.Message" = pekko-persistence-message
+ "org.apache.pekko.persistence.serialization.Snapshot" = pekko-persistence-snapshot
}
serialization-identifiers {
- "akka.persistence.serialization.MessageSerializer" = 7
- "akka.persistence.serialization.SnapshotSerializer" = 8
+ "org.apache.pekko.persistence.serialization.MessageSerializer" = 7
+ "org.apache.pekko.persistence.serialization.SnapshotSerializer" = 8
}
}
###################################################
# In-memory journal plugin.
-akka.persistence.journal.inmem {
+pekko.persistence.journal.inmem {
# Class name of the plugin.
- class = "akka.persistence.journal.inmem.InmemJournal"
+ class = "org.apache.pekko.persistence.journal.inmem.InmemJournal"
# Dispatcher for the plugin actor.
- plugin-dispatcher = "akka.actor.default-dispatcher"
+ plugin-dispatcher = "pekko.actor.default-dispatcher"
# Turn this on to test serialization of the events
test-serialization = off
}
# Local file system snapshot store plugin.
-akka.persistence.snapshot-store.local {
+pekko.persistence.snapshot-store.local {
# Class name of the plugin.
- class = "akka.persistence.snapshot.local.LocalSnapshotStore"
+ class = "org.apache.pekko.persistence.snapshot.local.LocalSnapshotStore"
# Dispatcher for the plugin actor.
- plugin-dispatcher = "akka.persistence.dispatchers.default-plugin-dispatcher"
+ plugin-dispatcher = "pekko.persistence.dispatchers.default-plugin-dispatcher"
# Dispatcher for streaming snapshot IO.
- stream-dispatcher = "akka.persistence.dispatchers.default-stream-dispatcher"
+ stream-dispatcher = "pekko.persistence.dispatchers.default-stream-dispatcher"
# Storage location of snapshot files.
dir = "snapshots"
# Number of load attempts when recovery from the latest snapshot fails
# yet older snapshot files are available. Each recovery attempt will try
- # to recover using an older than previously failed-on snapshot file
+ # to recover using an older snapshot file than the one that previously failed
# (if any are present). If all attempts fail the recovery will fail and
# the persistent actor will be stopped.
max-load-attempts = 3
}
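For example, an application.conf that opts into this plugin as the default snapshot store might look like the following (the "target/snapshots" directory is an arbitrary choice):

```hocon
pekko.persistence.snapshot-store {
  plugin = "pekko.persistence.snapshot-store.local"
}
# keep snapshot files out of the project root (path is an example)
pekko.persistence.snapshot-store.local.dir = "target/snapshots"
```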
# LevelDB journal plugin.
-# Note: this plugin requires explicit LevelDB dependency, see below.
-akka.persistence.journal.leveldb {
+# Note: this plugin requires explicit LevelDB dependency, see below.
+pekko.persistence.journal.leveldb {
# Class name of the plugin.
- class = "akka.persistence.journal.leveldb.LeveldbJournal"
+ class = "org.apache.pekko.persistence.journal.leveldb.LeveldbJournal"
# Dispatcher for the plugin actor.
- plugin-dispatcher = "akka.persistence.dispatchers.default-plugin-dispatcher"
+ plugin-dispatcher = "pekko.persistence.dispatchers.default-plugin-dispatcher"
# Dispatcher for message replay.
- replay-dispatcher = "akka.persistence.dispatchers.default-replay-dispatcher"
+ replay-dispatcher = "pekko.persistence.dispatchers.default-replay-dispatcher"
# Storage location of LevelDB files.
dir = "journal"
# Use fsync on write.
}
# Shared LevelDB journal plugin (for testing only).
-# Note: this plugin requires explicit LevelDB dependency, see below.
-akka.persistence.journal.leveldb-shared {
+# Note: this plugin requires explicit LevelDB dependency, see below.
+pekko.persistence.journal.leveldb-shared {
# Class name of the plugin.
- class = "akka.persistence.journal.leveldb.SharedLeveldbJournal"
+ class = "org.apache.pekko.persistence.journal.leveldb.SharedLeveldbJournal"
# Dispatcher for the plugin actor.
- plugin-dispatcher = "akka.actor.default-dispatcher"
+ plugin-dispatcher = "pekko.actor.default-dispatcher"
# Timeout for async journal operations.
timeout = 10s
store {
# Dispatcher for shared store actor.
- store-dispatcher = "akka.persistence.dispatchers.default-plugin-dispatcher"
+ store-dispatcher = "pekko.persistence.dispatchers.default-plugin-dispatcher"
# Dispatcher for message replay.
- replay-dispatcher = "akka.persistence.dispatchers.default-replay-dispatcher"
+ replay-dispatcher = "pekko.persistence.dispatchers.default-replay-dispatcher"
# Storage location of LevelDB files.
dir = "journal"
# Use fsync on write.
}
}
-akka.persistence.journal.proxy {
+pekko.persistence.journal.proxy {
# Class name of the plugin.
- class = "akka.persistence.journal.PersistencePluginProxy"
+ class = "org.apache.pekko.persistence.journal.PersistencePluginProxy"
# Dispatcher for the plugin actor.
- plugin-dispatcher = "akka.actor.default-dispatcher"
+ plugin-dispatcher = "pekko.actor.default-dispatcher"
# Set this to on in the configuration of the ActorSystem
# that will host the target journal
start-target-journal = off
init-timeout = 10s
}
-akka.persistence.snapshot-store.proxy {
+pekko.persistence.snapshot-store.proxy {
# Class name of the plugin.
- class = "akka.persistence.journal.PersistencePluginProxy"
+ class = "org.apache.pekko.persistence.journal.PersistencePluginProxy"
# Dispatcher for the plugin actor.
- plugin-dispatcher = "akka.actor.default-dispatcher"
+ plugin-dispatcher = "pekko.actor.default-dispatcher"
# Set this to on in the configuration of the ActorSystem
# that will host the target snapshot-store
start-target-snapshot-store = off
+# SPDX-License-Identifier: Apache-2.0
+
#//#shared
-#####################################
-# Akka Remote Reference Config File #
-#####################################
+######################################
+# Pekko Remote Reference Config File #
+######################################
# This is the reference config file that contains all the default settings.
# Make your edits/overrides in your application.conf.
-# comments about akka.actor settings left out where they are already in akka-
+# comments about pekko.actor settings left out where they are already in pekko-
# actor.jar, because otherwise they would be repeated in config rendering.
#
# For the configuration of the new remoting implementation (Artery) please look
# at the bottom section of this file as it is listed separately.
-akka {
+pekko {
actor {
serializers {
- akka-containers = "akka.remote.serialization.MessageContainerSerializer"
- akka-misc = "akka.remote.serialization.MiscMessageSerializer"
- artery = "akka.remote.serialization.ArteryMessageSerializer"
- proto = "akka.remote.serialization.ProtobufSerializer"
- daemon-create = "akka.remote.serialization.DaemonMsgCreateSerializer"
- akka-system-msg = "akka.remote.serialization.SystemMessageSerializer"
+ pekko-containers = "org.apache.pekko.remote.serialization.MessageContainerSerializer"
+ pekko-misc = "org.apache.pekko.remote.serialization.MiscMessageSerializer"
+ artery = "org.apache.pekko.remote.serialization.ArteryMessageSerializer"
+ proto = "org.apache.pekko.remote.serialization.ProtobufSerializer"
+ daemon-create = "org.apache.pekko.remote.serialization.DaemonMsgCreateSerializer"
+ pekko-system-msg = "org.apache.pekko.remote.serialization.SystemMessageSerializer"
}
serialization-bindings {
- "akka.actor.ActorSelectionMessage" = akka-containers
+ "org.apache.pekko.actor.ActorSelectionMessage" = pekko-containers
- "akka.remote.DaemonMsgCreate" = daemon-create
+ "org.apache.pekko.remote.DaemonMsgCreate" = daemon-create
- "akka.remote.artery.ArteryMessage" = artery
+ "org.apache.pekko.remote.artery.ArteryMessage" = artery
- # Since akka.protobuf.Message does not extend Serializable but
- # GeneratedMessage does, need to use the more specific one here in order
- # to avoid ambiguity.
- # This is only loaded if akka-protobuf is on the classpath
- # It should not be used and users should migrate to using the protobuf classes
- # directly
- # Remove in 2.7
- "akka.protobuf.GeneratedMessage" = proto
-
- "akka.protobufv3.internal.GeneratedMessageV3" = proto
+ "org.apache.pekko.protobufv3.internal.GeneratedMessageV3" = proto
# Since com.google.protobuf.Message does not extend Serializable but
# GeneratedMessage does, we need to use the more specific one here in order
# to avoid ambiguity.
"com.google.protobuf.GeneratedMessage" = proto
"com.google.protobuf.GeneratedMessageV3" = proto
- "akka.actor.Identify" = akka-misc
- "akka.actor.ActorIdentity" = akka-misc
- "scala.Some" = akka-misc
- "scala.None$" = akka-misc
- "java.util.Optional" = akka-misc
- "akka.actor.Status$Success" = akka-misc
- "akka.actor.Status$Failure" = akka-misc
- "akka.actor.ActorRef" = akka-misc
- "akka.actor.PoisonPill$" = akka-misc
- "akka.actor.Kill$" = akka-misc
- "akka.remote.RemoteWatcher$Heartbeat$" = akka-misc
- "akka.remote.RemoteWatcher$HeartbeatRsp" = akka-misc
- "akka.Done" = akka-misc
- "akka.NotUsed" = akka-misc
- "akka.actor.Address" = akka-misc
- "akka.remote.UniqueAddress" = akka-misc
-
- "akka.actor.ActorInitializationException" = akka-misc
- "akka.actor.IllegalActorStateException" = akka-misc
- "akka.actor.ActorKilledException" = akka-misc
- "akka.actor.InvalidActorNameException" = akka-misc
- "akka.actor.InvalidMessageException" = akka-misc
- "java.util.concurrent.TimeoutException" = akka-misc
- "akka.remote.serialization.ThrowableNotSerializableException" = akka-misc
-
- "akka.actor.LocalScope$" = akka-misc
- "akka.remote.RemoteScope" = akka-misc
-
- "com.typesafe.config.impl.SimpleConfig" = akka-misc
- "com.typesafe.config.Config" = akka-misc
-
- "akka.routing.FromConfig" = akka-misc
- "akka.routing.DefaultResizer" = akka-misc
- "akka.routing.BalancingPool" = akka-misc
- "akka.routing.BroadcastGroup" = akka-misc
- "akka.routing.BroadcastPool" = akka-misc
- "akka.routing.RandomGroup" = akka-misc
- "akka.routing.RandomPool" = akka-misc
- "akka.routing.RoundRobinGroup" = akka-misc
- "akka.routing.RoundRobinPool" = akka-misc
- "akka.routing.ScatterGatherFirstCompletedGroup" = akka-misc
- "akka.routing.ScatterGatherFirstCompletedPool" = akka-misc
- "akka.routing.SmallestMailboxPool" = akka-misc
- "akka.routing.TailChoppingGroup" = akka-misc
- "akka.routing.TailChoppingPool" = akka-misc
- "akka.remote.routing.RemoteRouterConfig" = akka-misc
-
- "akka.pattern.StatusReply" = akka-misc
-
- "akka.dispatch.sysmsg.SystemMessage" = akka-system-msg
+ "org.apache.pekko.actor.Identify" = pekko-misc
+ "org.apache.pekko.actor.ActorIdentity" = pekko-misc
+ "scala.Some" = pekko-misc
+ "scala.None$" = pekko-misc
+ "java.util.Optional" = pekko-misc
+ "org.apache.pekko.actor.Status$Success" = pekko-misc
+ "org.apache.pekko.actor.Status$Failure" = pekko-misc
+ "org.apache.pekko.actor.ActorRef" = pekko-misc
+ "org.apache.pekko.actor.PoisonPill$" = pekko-misc
+ "org.apache.pekko.actor.Kill$" = pekko-misc
+ "org.apache.pekko.remote.RemoteWatcher$Heartbeat$" = pekko-misc
+ "org.apache.pekko.remote.RemoteWatcher$HeartbeatRsp" = pekko-misc
+ "org.apache.pekko.Done" = pekko-misc
+ "org.apache.pekko.NotUsed" = pekko-misc
+ "org.apache.pekko.actor.Address" = pekko-misc
+ "org.apache.pekko.remote.UniqueAddress" = pekko-misc
+
+ "org.apache.pekko.actor.ActorInitializationException" = pekko-misc
+ "org.apache.pekko.actor.IllegalActorStateException" = pekko-misc
+ "org.apache.pekko.actor.ActorKilledException" = pekko-misc
+ "org.apache.pekko.actor.InvalidActorNameException" = pekko-misc
+ "org.apache.pekko.actor.InvalidMessageException" = pekko-misc
+ "java.util.concurrent.TimeoutException" = pekko-misc
+ "org.apache.pekko.remote.serialization.ThrowableNotSerializableException" = pekko-misc
+
+ "org.apache.pekko.actor.LocalScope$" = pekko-misc
+ "org.apache.pekko.remote.RemoteScope" = pekko-misc
+
+ "com.typesafe.config.impl.SimpleConfig" = pekko-misc
+ "com.typesafe.config.Config" = pekko-misc
+
+ "org.apache.pekko.routing.FromConfig" = pekko-misc
+ "org.apache.pekko.routing.DefaultResizer" = pekko-misc
+ "org.apache.pekko.routing.BalancingPool" = pekko-misc
+ "org.apache.pekko.routing.BroadcastGroup" = pekko-misc
+ "org.apache.pekko.routing.BroadcastPool" = pekko-misc
+ "org.apache.pekko.routing.RandomGroup" = pekko-misc
+ "org.apache.pekko.routing.RandomPool" = pekko-misc
+ "org.apache.pekko.routing.RoundRobinGroup" = pekko-misc
+ "org.apache.pekko.routing.RoundRobinPool" = pekko-misc
+ "org.apache.pekko.routing.ScatterGatherFirstCompletedGroup" = pekko-misc
+ "org.apache.pekko.routing.ScatterGatherFirstCompletedPool" = pekko-misc
+ "org.apache.pekko.routing.SmallestMailboxPool" = pekko-misc
+ "org.apache.pekko.routing.TailChoppingGroup" = pekko-misc
+ "org.apache.pekko.routing.TailChoppingPool" = pekko-misc
+ "org.apache.pekko.remote.routing.RemoteRouterConfig" = pekko-misc
+
+ "org.apache.pekko.pattern.StatusReply" = pekko-misc
+
+ "org.apache.pekko.dispatch.sysmsg.SystemMessage" = pekko-system-msg
# Java Serializer is by default used for exceptions and will by default
# not be allowed to be serialized, but in certain cases they are replaced
- # by `akka.remote.serialization.ThrowableNotSerializableException` if
+ # by `org.apache.pekko.remote.serialization.ThrowableNotSerializableException` if
# no specific serializer has been defined:
- # - when wrapped in `akka.actor.Status.Failure` for ask replies
+ # - when wrapped in `org.apache.pekko.actor.Status.Failure` for ask replies
# - when wrapped in system messages for exceptions from remote deployed child actors
#
# It's recommended that you implement a custom serializer for exceptions that are
- # sent remotely, You can add binding to akka-misc (MiscMessageSerializer) for the
+ # sent remotely. You can add a binding to pekko-misc (MiscMessageSerializer) for the
# exceptions that have a constructor with a single message String, or a constructor with
# a message String as the first parameter and a cause Throwable as the second. Note that it's not
# safe to add this binding for general exceptions such as IllegalArgumentException
}
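Following the recommendation above, a binding for an application exception could be added like this (com.example.MyBusinessException is a hypothetical class, named here only for illustration):

```hocon
pekko.actor.serialization-bindings {
  # hypothetical exception type; it must expose a (String) or
  # (String, Throwable) constructor for MiscMessageSerializer to handle it
  "com.example.MyBusinessException" = pekko-misc
}
```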
serialization-identifiers {
- "akka.remote.serialization.ProtobufSerializer" = 2
- "akka.remote.serialization.DaemonMsgCreateSerializer" = 3
- "akka.remote.serialization.MessageContainerSerializer" = 6
- "akka.remote.serialization.MiscMessageSerializer" = 16
- "akka.remote.serialization.ArteryMessageSerializer" = 17
-
- "akka.remote.serialization.SystemMessageSerializer" = 22
-
- # deprecated in 2.6.0, moved to akka-actor
- "akka.remote.serialization.LongSerializer" = 18
- # deprecated in 2.6.0, moved to akka-actor
- "akka.remote.serialization.IntSerializer" = 19
- # deprecated in 2.6.0, moved to akka-actor
- "akka.remote.serialization.StringSerializer" = 20
- # deprecated in 2.6.0, moved to akka-actor
- "akka.remote.serialization.ByteStringSerializer" = 21
+ "org.apache.pekko.remote.serialization.ProtobufSerializer" = 2
+ "org.apache.pekko.remote.serialization.DaemonMsgCreateSerializer" = 3
+ "org.apache.pekko.remote.serialization.MessageContainerSerializer" = 6
+ "org.apache.pekko.remote.serialization.MiscMessageSerializer" = 16
+ "org.apache.pekko.remote.serialization.ArteryMessageSerializer" = 17
+
+ "org.apache.pekko.remote.serialization.SystemMessageSerializer" = 22
+
+ # deprecated in Akka 2.6.0, moved to pekko-actor
+ "org.apache.pekko.remote.serialization.LongSerializer" = 18
+ # deprecated in Akka 2.6.0, moved to pekko-actor
+ "org.apache.pekko.remote.serialization.IntSerializer" = 19
+ # deprecated in Akka 2.6.0, moved to pekko-actor
+ "org.apache.pekko.remote.serialization.StringSerializer" = 20
+ # deprecated in Akka 2.6.0, moved to pekko-actor
+ "org.apache.pekko.remote.serialization.ByteStringSerializer" = 21
}
deployment {
default {
# if this is set to a valid remote address, the named actor will be
- # deployed at that node e.g. "akka://sys@host:port"
+ # deployed at that node e.g. "pekko://sys@host:port"
remote = ""
target {
# A list of hostnames and ports for instantiating the children of a
# router
- # The format should be on "akka://sys@host:port", where:
+ # The format should be "pekko://sys@host:port", where:
# - sys is the remote actor system name
# - hostname can be either hostname or IP address the remote actor
# should connect to
# is 'off'. Set this to 'off' to suppress these.
warn-unsafe-watch-outside-cluster = on
+ # When receiving requests from other remote actors, the protocol name
+ # prefixes that are considered valid. Useful when performing rolling cluster
+ # migrations between compatible systems such as Lightbend's Akka.
+ # By default, we only support "pekko" protocol.
+ # If you want to also support Akka, change this config to:
+ # pekko.remote.accept-protocol-names = ["pekko", "akka"]
+ # A ConfigurationException will be thrown at runtime if the array is empty
+ # or contains values other than "pekko" and/or "akka".
+ accept-protocol-names = ["pekko"]
+
+ # The protocol name to use when sending requests to other remote actors.
+ # Useful when dealing with rolling migration, i.e. temporarily change
+ # the protocol name to match another compatible actor implementation
+ # such as Lightbend's "akka" (whilst making sure accept-protocol-names
+ # contains "akka") so that you can gracefully migrate all nodes to Apache
+ # Pekko and then change the protocol-name back to "pekko" once all
+ # nodes are running on Apache Pekko.
+ # A ConfigurationException will be thrown at runtime if the value is not
+ # set to "pekko" or "akka".
+ protocol-name = "pekko"
+
+ # When pekko.remote.accept-protocol-names contains "akka", then we
+ # need to know the Akka version. If you include the Akka jars on the classpath,
+ # we can use the akka.version from their configuration. This configuration
+ # setting is only used if we can't find an akka.version setting.
+ akka.version = "2.6.21"
+
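Putting the two settings above together, the first phase of a rolling Akka-to-Pekko migration could be sketched as follows (illustrative only, not a complete migration guide):

```hocon
pekko.remote {
  # phase 1: keep speaking "akka" while accepting both protocols
  protocol-name = "akka"
  accept-protocol-names = ["pekko", "akka"]
}
# phase 2 (once every node runs Pekko): set protocol-name back to
# "pekko" and drop "akka" from accept-protocol-names.
```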
# Settings for the Phi accrual failure detector (http://www.jaist.ac.jp/~defago/files/pdf/IS_RR_2004_010.pdf
# [Hayashibara et al]) used for remote death watch.
# The default PhiAccrualFailureDetector will trigger if there are no heartbeats within
watch-failure-detector {
# FQCN of the failure detector implementation.
- # It must implement akka.remote.FailureDetector and have
+ # It must implement org.apache.pekko.remote.FailureDetector and have
# a public constructor with a com.typesafe.config.Config and
- # akka.actor.EventStream parameter.
- implementation-class = "akka.remote.PhiAccrualFailureDetector"
+ # org.apache.pekko.actor.EventStream parameter.
+ implementation-class = "org.apache.pekko.remote.PhiAccrualFailureDetector"
# How often keep-alive heartbeat messages should be sent to each connection.
heartbeat-interval = 1 s
unreachable-nodes-reaper-interval = 1s
# After the heartbeat request has been sent the first failure detection
- # will start after this period, even though no heartbeat mesage has
+ # will start after this period, even though no heartbeat message has
# been received.
expected-response-after = 1 s
# deprecated, use `enable-allow-list`
enable-whitelist = off
- # If true, will only allow specific classes listed in `allowed-actor-classes` to be instanciated on this
+ # If true, will only allow specific classes listed in `allowed-actor-classes` to be instantiated on this
# system via remote deployment
- enable-allow-list = ${akka.remote.deployment.enable-whitelist}
+ enable-allow-list = ${pekko.remote.deployment.enable-whitelist}
# deprecated, use `allowed-actor-classes`
whitelist = []
- allowed-actor-classes = ${akka.remote.deployment.whitelist}
+ allowed-actor-classes = ${pekko.remote.deployment.whitelist}
}
### Default dispatcher for the remoting subsystem
}
-akka {
+pekko {
remote {
#//#classic
### Configuration for classic remoting. Classic remoting is deprecated, use artery.
+ # Used as part of the Actor name for the Protocol Manager.
+ manager-name-prefix = "pekkoprotocolmanager"
# If set to a nonempty string remoting will use the given dispatcher for
# its internal actors otherwise the default dispatcher is used. Please note
# that since remoting can load arbitrary 3rd party drivers (see
# "enabled-transport" and "adapters" entries) it is not guaranteed that
# every module will respect this setting.
- use-dispatcher = "akka.remote.default-remote-dispatcher"
+ use-dispatcher = "pekko.remote.default-remote-dispatcher"
# Settings for the failure detector to monitor connections.
# For TCP it is not important to have fast failure detection, since
transport-failure-detector {
# FQCN of the failure detector implementation.
- # It must implement akka.remote.FailureDetector and have
+ # It must implement org.apache.pekko.remote.FailureDetector and have
# a public constructor with a com.typesafe.config.Config and
- # akka.actor.EventStream parameter.
- implementation-class = "akka.remote.DeadlineFailureDetector"
+ # org.apache.pekko.actor.EventStream parameter.
+ implementation-class = "org.apache.pekko.remote.DeadlineFailureDetector"
# How often keep-alive heartbeat messages should be sent to each connection.
heartbeat-interval = 4 s
# enabled-transports section) need longer time to be loaded.
startup-timeout = 10 s
- # Timout after which the graceful shutdown of the remoting subsystem is
+ # Timeout after which the graceful shutdown of the remoting subsystem is
# considered to be failed. After the timeout the remoting system is
# forcefully shut down. Increase this value if your transport drivers
# (see the enabled-transports section) need longer time to stop properly.
command-ack-timeout = 30 s
# The timeout for outbound associations to perform the handshake.
- # If the transport is akka.remote.classic.netty.tcp or akka.remote.classic.netty.ssl
+ # If the transport is pekko.remote.classic.netty.tcp or pekko.remote.classic.netty.ssl
# the configured connection-timeout for the transport will be used instead.
handshake-timeout = 15 s
### Logging
- # If this is "on", Akka will log all inbound messages at DEBUG level,
+ # If this is "on", Pekko will log all inbound messages at DEBUG level,
# if off then they are not logged
log-received-messages = off
- # If this is "on", Akka will log all outbound messages at DEBUG level,
+ # If this is "on", Pekko will log all outbound messages at DEBUG level,
# if off then they are not logged
log-sent-messages = off
- # Sets the log granularity level at which Akka logs remoting events. This setting
+ # Sets the log granularity level at which Pekko logs remoting events. This setting
# can take the values OFF, ERROR, WARNING, INFO, DEBUG, or ON. For compatibility
# reasons the setting "on" will default to "debug" level. Please note that the effective
# logging level is still determined by the global logging level of the actor system:
# pointing to an implementation class of the Transport interface.
# If multiple transports are provided, the address of the first
# one will be used as a default address.
- enabled-transports = ["akka.remote.classic.netty.tcp"]
+ enabled-transports = ["pekko.remote.classic.netty.tcp"]
# Transport drivers can be augmented with adapters by adding their
# name to the applied-adapters setting in the configuration of a
# transport. The available adapters should be configured in this
# section by providing a name, and the fully qualified name of
# their corresponding implementation. The class given here
- # must implement akka.akka.remote.transport.TransportAdapterProvider
+ # must implement org.apache.pekko.remote.transport.TransportAdapterProvider
# and have public constructor without parameters.
adapters {
- gremlin = "akka.remote.transport.FailureInjectorProvider"
- trttl = "akka.remote.transport.ThrottlerProvider"
+ gremlin = "org.apache.pekko.remote.transport.FailureInjectorProvider"
+ trttl = "org.apache.pekko.remote.transport.ThrottlerProvider"
}
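As an example of the adapter mechanism described above, the throttler adapter can be stacked on a transport in a test configuration (testing only, and only a sketch):

```hocon
pekko.remote.classic.netty.tcp {
  # a single adapter sits between the driver and the standard Pekko protocol
  applied-adapters = ["trttl"]
}
```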
### Default configuration for the Netty based transport drivers
netty.tcp {
- # The class given here must implement the akka.remote.transport.Transport
+ # The class given here must implement the org.apache.pekko.remote.transport.Transport
# interface and offer a public constructor which takes two arguments:
- # 1) akka.actor.ExtendedActorSystem
+ # 1) org.apache.pekko.actor.ExtendedActorSystem
# 2) com.typesafe.config.Config
- transport-class = "akka.remote.transport.netty.NettyTransport"
+ transport-class = "org.apache.pekko.remote.transport.netty.NettyTransport"
# Transport drivers can be augmented with adapters by adding their
# name to the applied-adapters list. The last adapter in the
# list is the adapter immediately above the driver, while
# the first one is the top of the stack below the standard
- # Akka protocol
+ # Pekko protocol
applied-adapters = []
# The default remote server port clients should connect to.
- # Default is 2552 (AKKA), use 0 if you want a random available port
+ # Default is 7355 (PEKK on a telephone keypad), use 0 if you want a random available port
# This port needs to be unique for each actor system on the same machine.
- port = 2552
+ port = 7355
# The hostname or ip clients should connect to.
# InetAddress.getLocalHost.getHostAddress is used if empty
# Use this setting to bind a network interface to a different port
# than remoting protocol expects messages at. This may be used
- # when running akka nodes in a separated networks (under NATs or docker containers).
+ # when running Pekko nodes in separate networks (under NATs or docker containers).
# Use 0 if you want a random available port. Examples:
#
- # akka.remote.classic.netty.tcp.port = 2552
- # akka.remote.classic.netty.tcp.bind-port = 2553
- # Network interface will be bound to the 2553 port, but remoting protocol will
- # expect messages sent to port 2552.
+ # pekko.remote.classic.netty.tcp.port = 7355
+ # pekko.remote.classic.netty.tcp.bind-port = 7356
+ # Network interface will be bound to the 7356 port, but remoting protocol will
+ # expect messages sent to port 7355.
#
- # akka.remote.classic.netty.tcp.port = 0
- # akka.remote.classic.netty.tcp.bind-port = 0
+ # pekko.remote.classic.netty.tcp.port = 0
+ # pekko.remote.classic.netty.tcp.bind-port = 0
# Network interface will be bound to a random port, and remoting protocol will
# expect messages sent to the bound port.
#
- # akka.remote.classic.netty.tcp.port = 2552
- # akka.remote.classic.netty.tcp.bind-port = 0
+ # pekko.remote.classic.netty.tcp.port = 7355
+ # pekko.remote.classic.netty.tcp.bind-port = 0
# Network interface will be bound to a random port, but remoting protocol will
- # expect messages sent to port 2552.
+ # expect messages sent to port 7355.
#
- # akka.remote.classic.netty.tcp.port = 0
- # akka.remote.classic.netty.tcp.bind-port = 2553
- # Network interface will be bound to the 2553 port, and remoting protocol will
+ # pekko.remote.classic.netty.tcp.port = 0
+ # pekko.remote.classic.netty.tcp.bind-port = 7356
+ # Network interface will be bound to the 7356 port, and remoting protocol will
# expect messages sent to the bound port.
#
- # akka.remote.classic.netty.tcp.port = 2552
- # akka.remote.classic.netty.tcp.bind-port = ""
- # Network interface will be bound to the 2552 port, and remoting protocol will
+ # pekko.remote.classic.netty.tcp.port = 7355
+ # pekko.remote.classic.netty.tcp.bind-port = ""
+ # Network interface will be bound to the 7355 port, and remoting protocol will
# expect messages sent to the bound port.
#
- # akka.remote.classic.netty.tcp.port if empty
+ # pekko.remote.classic.netty.tcp.port if empty
bind-port = ""
# Use this setting to bind a network interface to a different hostname or ip
# than remoting protocol expects messages at.
# Use "0.0.0.0" to bind to all interfaces.
- # akka.remote.classic.netty.tcp.hostname if empty
+ # pekko.remote.classic.netty.tcp.hostname if empty
bind-hostname = ""
# Enables SSL support on this transport
# will be used to accept inbound connections, and perform IO. If "" then
# dedicated threads will be used.
# Please note that the Netty driver only uses this configuration and does
- # not read the "akka.remote.use-dispatcher" entry. Instead it has to be
+ # not read the "pekko.remote.use-dispatcher" entry. Instead it has to be
# configured manually to point to the same dispatcher if needed.
use-dispatcher-for-io = ""
# Enables the TCP_NODELAY flag, i.e. disables Nagle’s algorithm
tcp-nodelay = on
- # Enables TCP Keepalive, subject to the O/S kernel’s configuration
+ # Enables TCP Keep-alive, subject to the O/S kernel’s configuration
tcp-keepalive = on
# Enables SO_REUSEADDR, which determines when an ActorSystem can open
}
- netty.ssl = ${akka.remote.classic.netty.tcp}
+ netty.ssl = ${pekko.remote.classic.netty.tcp}
netty.ssl = {
# Enable SSL/TLS encryption.
# This must be enabled on both the client and server to work.
enable-ssl = true
# Factory of SSLEngine.
- # Must implement akka.remote.transport.netty.SSLEngineProvider and have a public
+ # Must implement org.apache.pekko.remote.transport.netty.SSLEngineProvider and have a public
# constructor with an ActorSystem parameter.
# The default ConfigSSLEngineProvider is configured by properties in section
- # akka.remote.classic.netty.ssl.security
+ # pekko.remote.classic.netty.ssl.security
#
# The SSLEngineProvider can also be defined via ActorSystemSetup with
# SSLEngineProviderSetup when starting the ActorSystem. That is useful when
# the SSLEngineProvider implementation requires other external constructor
# parameters or is created before the ActorSystem is created.
# If such SSLEngineProviderSetup is defined this config property is not used.
- ssl-engine-provider = akka.remote.transport.netty.ConfigSSLEngineProvider
+ ssl-engine-provider = org.apache.pekko.remote.transport.netty.ConfigSSLEngineProvider
security {
# This is the Java Key Store used by the server connection
# Protocol to use for SSL encryption.
protocol = "TLSv1.2"
- # Example: ["TLS_DHE_RSA_WITH_AES_128_GCM_SHA256",
+ # Example: ["TLS_DHE_RSA_WITH_AES_128_GCM_SHA256",
# "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
# "TLS_DHE_RSA_WITH_AES_256_GCM_SHA384",
# "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"]
- # When doing rolling upgrades, make sure to include both the algorithm used
+ # When doing rolling upgrades, make sure to include both the algorithm used
# by old nodes and the preferred algorithm.
# If you use a JDK 8 prior to 8u161 you need to install
# the JCE Unlimited Strength Jurisdiction Policy Files to use AES 256.
# the passive side will also request and verify a certificate from the connecting peer.
#
# To prevent man-in-the-middle attacks this setting is enabled by default.
- #
- # Note: Nodes that are configured with this setting to 'on' might not be able to receive messages from nodes that
- # run on older versions of akka-remote. This is because in versions of Akka < 2.4.12 the active side of the remoting
- # connection will not send over certificates even if asked.
- #
- # However, starting with Akka 2.4.12, even with this setting "off", the active side (TLS client side)
- # will use the given key-store to send over a certificate if asked. A rolling upgrade from versions of
- # Akka < 2.4.12 can therefore work like this:
- # - upgrade all nodes to an Akka version >= 2.4.12, in the best case the latest version, but keep this setting at "off"
- # - then switch this flag to "on" and do again a rolling upgrade of all nodes
- # The first step ensures that all nodes will send over a certificate when asked to. The second
- # step will ensure that all nodes finally enforce the secure checking of client certificates.
require-mutual-authentication = on
}
}
#//#classic
#//#artery
-akka {
+pekko {
remote {
# Select the underlying transport implementation.
#
# Possible values: aeron-udp, tcp, tls-tcp
- # See https://doc.akka.io/docs/akka/current/remoting-artery.html#selecting-a-transport for the tradeoffs
+ # See https://pekko.apache.org/docs/pekko/current/remoting-artery.html#selecting-a-transport for the tradeoffs
# for each transport
transport = tcp
canonical {
# The default remote server port clients should connect to.
- # Default is 25520, use 0 if you want a random available port
+ # Default is 17355, use 0 if you want a random available port
# This port needs to be unique for each actor system on the same machine.
- port = 25520
+ port = 17355
# Hostname clients should connect to. Can be set to an ip, hostname
# or one of the following special values:
}
# Use these settings to bind a network interface to a different address
- # than artery expects messages at. This may be used when running Akka
+ # than artery expects messages at. This may be used when running Pekko
# nodes in separated networks (under NATs or in containers). If canonical
# and bind addresses are different, then network configuration that relays
# communications from canonical to bind addresses is expected.
# Port to bind a network interface to. Can be set to a port number
# of one of the following special values:
# 0 random available port
- # "" akka.remote.artery.canonical.port
+ # "" pekko.remote.artery.canonical.port
#
port = ""
# Hostname to bind a network interface to. Can be set to an ip, hostname
# or one of the following special values:
# "0.0.0.0" all interfaces
- # "" akka.remote.artery.canonical.hostname
+ # "" pekko.remote.artery.canonical.hostname
# "<getHostAddress>" InetAddress.getLocalHost.getHostAddress
# "<getHostName>" InetAddress.getLocalHost.getHostName
#
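The canonical/bind split above is what makes NAT and container deployments work: peers dial the canonical address while the node listens on the bind address. A typical override (the hostname value is a placeholder) looks like:

```hocon
pekko.remote.artery {
  canonical {
    hostname = "node1.example.com"  # address other nodes dial (placeholder)
    port = 17355
  }
  bind {
    hostname = "0.0.0.0"  # listen on every local interface inside the container
    port = ""             # "" means: reuse canonical.port
  }
}
```

This assumes the surrounding network (NAT rule, container port mapping) relays traffic from the canonical address to the bind address, as the comment block states.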
buffer-pool-size = 128
# Maximum serialized message size for the large messages, including header data.
- # If the value of akka.remote.artery.transport is set to aeron-udp, it is currently
+ # If the value of pekko.remote.artery.transport is set to aeron-udp, it is currently
# restricted to 1/8th the size of a term buffer that can be configured by setting the
# 'aeron.term.buffer.length' system property.
# See 'large-message-destinations'.
# collected, which is not as efficient as reusing buffers in the pool.
large-buffer-pool-size = 32
- # For enabling testing features, such as blackhole in akka-remote-testkit.
+ # For enabling testing features, such as blackhole in pekko-remote-testkit.
test-mode = off
# Settings for the materializer that is used for the remote streams.
- materializer = ${akka.stream.materializer}
+ materializer = ${pekko.stream.materializer}
# Remoting will use the given dispatcher for the ordinary and large message
# streams.
- use-dispatcher = "akka.remote.default-remote-dispatcher"
+ use-dispatcher = "pekko.remote.default-remote-dispatcher"
# Remoting will use the given dispatcher for the control stream.
# It can be good to not use the same dispatcher for the control stream as
# the dispatcher for the ordinary message stream so that heartbeat messages
# are not disturbed.
- use-control-stream-dispatcher = "akka.actor.internal-dispatcher"
+ use-control-stream-dispatcher = "pekko.actor.internal-dispatcher"
# Total number of inbound lanes, shared among all inbound associations. A value
# the queue becomes full. This may happen if you send a burst of many messages
# without end-to-end flow control. Note that there is one such queue per
# outbound association. The trade-off of using a larger queue size is that
- # it consumes more memory, since the queue is based on preallocated array with
+ # it consumes more memory, since the queue is based on pre-allocated array with
# fixed size.
outbound-message-queue-size = 3072
# need to survive.
# The value must also be greater than stop-idle-outbound-after.
# Once every 1/10 of this duration an extra handshake message will be sent.
- # Therfore it's also recommended to use a value that is greater than 10 times
+ # Therefore it's also recommended to use a value that is greater than 10 times
# the stop-idle-outbound-after, since otherwise the idle streams will not be
# stopped.
quarantine-idle-outbound-after = 6 hours
# and also unused for this duration before it's removed. When removed the historical
# information about which UIDs that were quarantined for that hostname:port is
# gone which could result in communication with a previously quarantined node
- # if it wakes up again. Therfore this shouldn't be set too low.
+ # if it wakes up again. Therefore this shouldn't be set too low.
remove-quarantined-association-after = 1 h
# during ActorSystem termination the remoting will wait this long for
# remote messages has been completed
shutdown-flush-timeout = 1 second
- # Before sending notificaiton of terminated actor (DeathWatchNotification) other messages
+ # Before sending notification of terminated actor (DeathWatchNotification) other messages
# will be flushed to make sure that the Terminated message arrives after other messages.
# It will wait this long for the flush acknowledgement before continuing.
# The flushing can be disabled by setting this to `off`.
# List of fully qualified class names of remote instruments which should
# be initialized and used for monitoring of remote messages.
- # The class must extend akka.remote.artery.RemoteInstrument and
+ # The class must extend org.apache.pekko.remote.artery.RemoteInstrument and
# have a public constructor with empty parameters or one ExtendedActorSystem
# parameter.
# A new instance of RemoteInstrument will be created for each encoder and decoder.
- # It's only called from the stage, so if it dosn't delegate to any shared instance
+ # It's only called from the stage, so if it doesn't delegate to any shared instance
# it doesn't have to be thread-safe.
- # Refer to `akka.remote.artery.RemoteInstrument` for more information.
- instruments = ${?akka.remote.artery.advanced.instruments} []
+ # Refer to `org.apache.pekko.remote.artery.RemoteInstrument` for more information.
+ instruments = ${?pekko.remote.artery.advanced.instruments} []
# Only used when transport is aeron-udp
aeron {
# Only used when transport is aeron-udp.
client-liveness-timeout = 20 seconds
- # Timout after after which an uncommitted publication will be unblocked
+ # Timeout after which an uncommitted publication will be unblocked
# Only used when transport is aeron-udp.
publication-unblock-timeout = 40 seconds
# SSL configuration that is used when transport=tls-tcp.
ssl {
# Factory of SSLEngine.
- # Must implement akka.remote.artery.tcp.SSLEngineProvider and have a public
+ # Must implement org.apache.pekko.remote.artery.tcp.SSLEngineProvider and have a public
# constructor with an ActorSystem parameter.
# The default ConfigSSLEngineProvider is configured by properties in section
- # akka.remote.artery.ssl.config-ssl-engine
- ssl-engine-provider = akka.remote.artery.tcp.ConfigSSLEngineProvider
+ # pekko.remote.artery.ssl.config-ssl-engine
+ ssl-engine-provider = org.apache.pekko.remote.artery.tcp.ConfigSSLEngineProvider
- # Config of akka.remote.artery.tcp.ConfigSSLEngineProvider
+ # Config of org.apache.pekko.remote.artery.tcp.ConfigSSLEngineProvider
config-ssl-engine {
# This is the Java Key Store used by the server connection
# Protocol to use for SSL encryption.
protocol = "TLSv1.2"
- # Example: ["TLS_DHE_RSA_WITH_AES_128_GCM_SHA256",
+ # Example: ["TLS_DHE_RSA_WITH_AES_128_GCM_SHA256",
# "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
# "TLS_DHE_RSA_WITH_AES_256_GCM_SHA384",
# "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"]
- # When doing rolling upgrades, make sure to include both the algorithm used
+ # When doing rolling upgrades, make sure to include both the algorithm used
# by old nodes and the preferred algorithm.
# If you use a JDK 8 prior to 8u161 you need to install
# the JCE Unlimited Strength Jurisdiction Policy Files to use AES 256.
hostname-verification = off
}
- # Config of akka.remote.artery.tcp.ssl.RotatingKeysSSLEngineProvider
+ # Config of org.apache.pekko.remote.artery.tcp.ssl.RotatingKeysSSLEngineProvider
# This engine provider reads PEM files from a mount point shared with the secret
# manager. The constructed SSLContext is cached some time (configurable) so when
# the credentials rotate the new credentials are eventually picked up.
# By default mTLS is enabled.
# This provider also includes a verification phase that runs after the TLS handshake
# phase. In this verification, both peers run an authorization and verify they are
- # part of the same akka cluster. The verification happens via comparing the subject
+ # part of the same Pekko cluster. The verification happens by comparing the subject
# names in the peer's certificate with the names on its own certificate, so if you
# use this SSLEngineProvider you should make sure all nodes on the cluster include
# at least one common subject name (CN or SAN).
# The Key setup this implementation supports has some limitations:
# 1. the private key must be provided on a PKCS#1 or a non-encrypted PKCS#8 PEM-formatted file
- # 2. the private key must be be of an algorythm supported by `akka-pki` tools (e.g. "RSA", not "EC")
+ # 2. the private key must be of an algorithm supported by `pekko-pki` tools (e.g. "RSA", not "EC")
# 3. the node certificate must be issued by a root CA (not an intermediate CA)
# 4. both the node and the CA certificates must be provided in PEM-formatted files
rotating-keys-engine {
# This is a convention that people may follow if they wish to save themselves some configuration
- secret-mount-point = /var/run/secrets/akka-tls/rotating-keys-engine
+ secret-mount-point = /var/run/secrets/pekko-tls/rotating-keys-engine
# The absolute path to the PEM file with the private key.
- key-file = ${akka.remote.artery.ssl.rotating-keys-engine.secret-mount-point}/tls.key
+ key-file = ${pekko.remote.artery.ssl.rotating-keys-engine.secret-mount-point}/tls.key
# The absolute path to the PEM file of the certificate for the private key above.
- cert-file = ${akka.remote.artery.ssl.rotating-keys-engine.secret-mount-point}/tls.crt
- # The absolute path to the PEM file of the certificate of the CA that emited
+ cert-file = ${pekko.remote.artery.ssl.rotating-keys-engine.secret-mount-point}/tls.crt
+ # The absolute path to the PEM file of the certificate of the CA that emitted
# the node certificate above.
- ca-cert-file = ${akka.remote.artery.ssl.rotating-keys-engine.secret-mount-point}/ca.crt
+ ca-cert-file = ${pekko.remote.artery.ssl.rotating-keys-engine.secret-mount-point}/ca.crt
# There are two options, and the default SecureRandom is recommended:
# "" or "SecureRandom" => (default)
-#####################################
-# Akka Stream Reference Config File #
-#####################################
+# SPDX-License-Identifier: Apache-2.0
+
+######################################
+# Pekko Stream Reference Config File #
+######################################
# eager creation of the system wide materializer
-akka.library-extensions += "akka.stream.SystemMaterializer$"
-akka {
+pekko.library-extensions += "org.apache.pekko.stream.SystemMaterializer$"
+pekko {
stream {
# Default materializer settings
# Fully qualified config path which holds the dispatcher configuration
# or full dispatcher configuration to be used by ActorMaterializer when creating Actors.
- dispatcher = "akka.actor.default-dispatcher"
+ dispatcher = "pekko.actor.default-dispatcher"
+
+ # FQCN of the MailboxType. The Class of the FQCN must have a public
+ # constructor with
+ # (org.apache.pekko.actor.ActorSystem.Settings, com.typesafe.config.Config) parameters.
+ # defaults to the single consumer mailbox for better performance.
+ mailbox {
+ mailbox-type = "org.apache.pekko.dispatch.SingleConsumerOnlyUnboundedMailbox"
+ }
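If an application wants stream operators off the shared default dispatcher, the materializer setting above can be overridden per deployment. The dispatcher name below is hypothetical, standing in for one the application defines elsewhere in its configuration:

```hocon
pekko.stream.materializer {
  # "my-app.stream-dispatcher" is a hypothetical dispatcher defined
  # elsewhere in the application's configuration.
  dispatcher = "my-app.stream-dispatcher"
}
```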
# Fully qualified config path which holds the dispatcher configuration
# or full dispatcher configuration to be used by stream operators that
# perform blocking operations
- blocking-io-dispatcher = "akka.actor.default-blocking-io-dispatcher"
+ blocking-io-dispatcher = "pekko.actor.default-blocking-io-dispatcher"
# Cleanup leaked publishers and subscribers when they are not used within a given
# deadline
mode = cancel
# time after which a subscriber / publisher is considered stale and eligible
- # for cancelation (see `akka.stream.subscription-timeout.mode`)
+ # for cancellation (see `pekko.stream.subscription-timeout.mode`)
timeout = 5s
}
//#stream-ref
}
- # Deprecated, left here to not break Akka HTTP which refers to it
- blocking-io-dispatcher = "akka.actor.default-blocking-io-dispatcher"
+ # Deprecated, left here to not break Pekko HTTP which refers to it
+ blocking-io-dispatcher = "pekko.actor.default-blocking-io-dispatcher"
- # Deprecated, will not be used unless user code refer to it, use 'akka.stream.materializer.blocking-io-dispatcher'
+ # Deprecated, will not be used unless user code refers to it; use 'pekko.stream.materializer.blocking-io-dispatcher'
# instead, or if from code, prefer the 'ActorAttributes.IODispatcher' attribute
- default-blocking-io-dispatcher = "akka.actor.default-blocking-io-dispatcher"
+ default-blocking-io-dispatcher = "pekko.actor.default-blocking-io-dispatcher"
}
- # configure overrides to ssl-configuration here (to be used by akka-streams, and akka-http – i.e. when serving https connections)
+ # configure overrides to ssl-configuration here (to be used by pekko-streams, and pekko-http – i.e. when serving https connections)
ssl-config {
protocol = "TLSv1.2"
}
actor {
serializers {
- akka-stream-ref = "akka.stream.serialization.StreamRefSerializer"
+ pekko-stream-ref = "org.apache.pekko.stream.serialization.StreamRefSerializer"
}
serialization-bindings {
- "akka.stream.SinkRef" = akka-stream-ref
- "akka.stream.SourceRef" = akka-stream-ref
- "akka.stream.impl.streamref.StreamRefsProtocol" = akka-stream-ref
+ "org.apache.pekko.stream.SinkRef" = pekko-stream-ref
+ "org.apache.pekko.stream.SourceRef" = pekko-stream-ref
+ "org.apache.pekko.stream.impl.streamref.StreamRefsProtocol" = pekko-stream-ref
}
serialization-identifiers {
- "akka.stream.serialization.StreamRefSerializer" = 30
+ "org.apache.pekko.stream.serialization.StreamRefSerializer" = 30
}
}
}
# ssl configuration
-# folded in from former ssl-config-akka module
+# folded in from former ssl-config-pekko module
ssl-config {
- logger = "com.typesafe.sslconfig.akka.util.AkkaLoggerBridge"
+ logger = "com.typesafe.sslconfig.pekko.util.PekkoLoggerBridge"
}
<relativePath>../../bundle-parent</relativePath>
</parent>
- <artifactId>repackaged-akka</artifactId>
+ <artifactId>repackaged-pekko</artifactId>
<packaging>bundle</packaging>
<name>${project.artifactId}</name>
<dependencies>
<dependency>
<groupId>org.opendaylight.controller</groupId>
- <artifactId>repackaged-akka-jar</artifactId>
+ <artifactId>repackaged-pekko-jar</artifactId>
<version>${project.version}</version>
<scope>provided</scope>
</dependency>
<execution>
<id>unpack-license</id>
<configuration>
- <!-- Akka is Apache-2.0 licensed -->
+ <!-- Pekko is Apache-2.0 licensed -->
<skip>true</skip>
</configuration>
</execution>
<artifactItems>
<artifactItem>
<groupId>org.opendaylight.controller</groupId>
- <artifactId>repackaged-akka-jar</artifactId>
+ <artifactId>repackaged-pekko-jar</artifactId>
<version>${project.version}</version>
</artifactItem>
<artifactItem>
<groupId>com.hierynomus</groupId>
<artifactId>asn-one</artifactId>
- <version>0.4.0</version>
+ <version>0.5.0</version>
</artifactItem>
</artifactItems>
<overWriteReleases>false</overWriteReleases>
</goals>
<configuration>
<classifier>sources</classifier>
- <includeArtifactIds>repackaged-akka-jar</includeArtifactIds>
+ <includeArtifactIds>repackaged-pekko-jar</includeArtifactIds>
<outputDirectory>${project.build.directory}/shaded-sources</outputDirectory>
</configuration>
</execution>
<id>shaded-sources</id>
<phase>prepare-package</phase>
<goals>
- <goal>add-source</goal>
+ <goal>add-source</goal>
</goals>
<configuration>
<sources>${project.build.directory}/shaded-sources</sources>
<extensions>true</extensions>
<configuration>
<instructions>
- <Automatic-Module-Name>org.opendaylight.controller.repackaged.akka</Automatic-Module-Name>
+ <Automatic-Module-Name>org.opendaylight.controller.repackaged.pekko</Automatic-Module-Name>
<Export-Package>
- akka.*,
+ org.apache.pekko.*,
- com.typesafe.sslconfig.akka.*,
+ com.typesafe.sslconfig.pekko.*,
jdk.jfr,
</Export-Package>
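Downstream modules would then depend on the rebranded bundle rather than on upstream Pekko directly. A sketch of such a dependency, assuming (as is usual in this project) that the version is inherited from the parent's `dependencyManagement`:

```xml
<dependency>
  <groupId>org.opendaylight.controller</groupId>
  <artifactId>repackaged-pekko</artifactId>
  <!-- version normally inherited from dependencyManagement in the parent -->
</dependency>
```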
<module>features</module>
<module>karaf</module>
- <module>akka</module>
+ <module>pekko</module>
<module>atomix-storage</module>
<module>bundle-parent</module>
<module>benchmark</module>