
Clarify the difference between outputs, outcomes, and benefits – Java Code Geeks



When I sent my newsletter last month, Modern management: do you want valuable outcomes? Create overarching goals, several readers asked me questions. Why did I distinguish between outputs, outcomes, and benefits? I decided it was well worth a blog post.

Here is how I define and use the terms.

Outputs

By itself, a customer cannot use an output. We might use outputs as a team. But they are incomplete. Some examples:

  • Tasks in a user story, such as “design a database schema”.
  • Or “write a test plan”.
  • Word counts matter to writers. (I track word counts, but you don’t see everything I write.)

We may need these tasks, but no one outside the team can use them.

If you always use phases in your projects, anything you designate as “done” is an output, not an outcome. Why? Because until you finish testing, you don’t know it’s done. (See “Done” and “Freeze” for more details.)

It’s the same idea with tasks in a user story. You may need to create a test plan or design a schema. But none of these tasks offers a benefit to the user.

These tasks also do not benefit the team, unless the team collaborates to accomplish the work. Why? Because part of the value of these artifacts lies in the discussion that creates them, not just in the artifacts themselves.

Outputs can help your team. But your customers use your outcomes.

Outcomes

We can learn from outcomes, whether it’s about risks or about how people use our work. We cannot learn from outputs because outputs are incomplete by themselves.

Which people are learning? Any of the following: end users, buyers/customers, or other teams. I gave examples of a variety of product-based outcomes in Consider product options with minimum outcomes. For writers, an outcome could be a finished article or blog post. Or a chapter in a book.

Note that I haven’t said anything yet about the benefits someone might get from an outcome. That’s because we can only create outcomes. We cannot guarantee benefits, because we cannot force people to use our outcomes. We can only offer those outcomes.

Benefits

Benefits are what people might gain from an outcome. We cannot hand benefits to them. We can only offer the outcome for the benefit.

Let me offer an example from my writing. I publish a wide variety of blog posts and books on agility in all its forms. In Create your successful agile project, I offered ways to rethink a given team’s approach to agility. Why? Because my clients were struggling, not succeeding, with their given framework.

I offered the outcome: a finished book.

The benefits to people who have read and acted on this writing? Substantial. Anyone who hasn’t read it? They didn’t accept my offer. I still created the outcome. They have not (yet) gained the benefit.

This is why I hate the idea of ROI for any kind of product. We cannot make people accept what we offer. That means we cannot predict the benefits people will gain. This is why I separate benefits from outcomes.

Separate benefits from outcomes

Since we cannot make our clients (or anyone else) choose to benefit from our work, we can only deliver usable outcomes. And that means that while we can track outputs within our team, it is not worth the trouble to track anything other than outcomes outside the team. That’s why I track cycle time, to see how long it takes us to publish an outcome.

When I track my word count (my output), I can see at a glance what I’m writing and how much. Should I write more? I can see this output and change what I’m doing. I don’t even need a formal retrospective.

However, you can’t use my word count unless I finish something and publish it. That is the outcome.

What if you change your behavior based on what I write? That is your benefit.

Here’s how it all works together. When we have an overarching goal, we all work towards that goal. While we might create outputs on the way to outcomes, we see that outcomes move us closer to that overarching goal. The shorter our feedback loops, the faster our users can realize the benefits.

When we separate the ideas of outputs, outcomes, and benefits, we can see how to publish something faster. For me, that is the best outcome of all.

(Thank you, dear readers, for your questions.)




The case of the missing JEPs – Java Code Geeks



The JDK Enhancement Proposal (JEP) process exists “for collecting, reviewing, sorting and recording the results of proposals for enhancements to the JDK and for related efforts, such as process and infrastructure improvements”. JEP 0 is the “JEP Index” of “all JDK Enhancement Proposals, known as JEPs”. This article gives a brief overview of current JEPs and discusses the surprisingly mysterious disappearance of two of them (JEP 187 and JEP 145).

An overview of JDK Enhancement Proposals

JEPs in the JEP Index with single-digit numbers are “Process”-type JEPs and are currently:

JEPs in the JEP Index with two-digit numbers are “Informational”-type JEPs and are currently:

The rest of the JEPs listed in the JEP Index (with three-digit numbers) are “Feature”-type JEPs and currently run from JEP 101 (“Generalized Target-Type Inference”) through JEP 418 (“Internet-Address Resolution SPI”), a new Candidate JEP as of this month [September 2021].

Finally, there are JEPs which do not yet have a JEP number and which are listed under the heading “Draft and submitted JEPs”. JEPs in this state do not yet have their own JEP numbers, but are listed with a number from the JDK Bug System (JBS).

Originally, a JEP could exist in one of several “JEP 1 process states”:

  • Draft
  • Posted
  • Submitted
  • Candidate
  • Funded
  • Completed
  • Withdrawn
  • Rejected
  • Active

The potential evolved JEP states are described in “Draft JEP: JEP 2.0, Draft 2”. That document has a “Workflow” section which states that the “revised JEP process has the following states and transitions for Feature and Infrastructure JEPs” and displays a useful graph of these workflows. The document also describes the states of a Feature JEP:

  • Draft
  • Submitted
  • Candidate
  • Proposed to Target
  • Targeted
  • Integrated
  • Complete
  • Closed/Delivered
  • Closed/Rejected
  • Proposed to Drop

Neither these documented states for Feature JEPs nor the additional text describing the state transitions covers a JEP with a JEP number (rather than only a JBS number) being completely removed, and that is what makes the disappearance of JEP 187 (“Serialization 2.0”) and JEP 145 (“Cache Compiled Code”) unexpected.

The disappearance of JEP 187 (“Serialization 2.0”)

JEP 187 is not listed in the JEP index, but we have the following evidence that it did exist at some point:

It is surprisingly difficult to find an explanation for what happened to JEP 187. Unlike the related serialization JEP 154 (“Remove Serialization”), which was moved to the “Closed/Withdrawn” state, JEP 187 appears to have been completely removed rather than remaining present with a “Closed/Withdrawn” or “Closed/Rejected” state. Adding to the suspicious circumstances surrounding JEP 187, two queries on OpenJDK mailing lists regarding the status of this JEP (December 14, 2014 on core-libs-dev and September 6, 2021 on jdk-dev) have so far gone unanswered.

The reasons for the complete disappearance of JEP 187 can be surmised by reading the “exploratory document” titled “Towards Better Serialization” (June 2019). I have also previously mentioned this in my post “JDK 11: the beginning of the end for Java serialization?”

The disappearance of JEP 145 (“Cache Compiled Code”)

Like JEP 187, JEP 145 is not listed in the JEP Index, but there is evidence that it existed at one time:

As with JEP 187, it is surprisingly difficult to find an explanation for the removal of JEP 145. There is a StackOverflow question about its fate, but the answers are mostly speculative (albeit plausible).

The most widespread speculation concerning the disappearance of JEP 145 is that it is no longer needed due to Ahead-of-Time (AOT) compilation.

Conclusion

It seems that JEP 187 (“Serialization 2.0”) and JEP 145 (“Cache Compiled Code”) were both made obsolete by evolving developments, but it is surprising that they have completely disappeared from the JEP Index rather than being left in place with a closed or withdrawn state.

Published on Java Code Geeks courtesy of Dustin Marx, partner of our JCG program. See the original article here: The case of the missing JEPs

The opinions expressed by contributors to Java Code Geeks are their own.




VMware revises Spring 6 and Spring Boot 3 for another decade



At SpringOne 2021, VMware revealed how Spring 6, slated for an October 2022 release, sets the framework up for another decade: it raises the baseline to Java 17 and sheds deprecated features and third-party integrations. Spring Boot 3 will use Spring 6 but does not yet have a release date.

Demonstrating the magnitude of this overhaul, Spring Framework will not have a new major release this year – for the first time since 2010. However, an upcoming minor release will support Java 17, and the Spring Boot 2.x release train will still see releases in November 2021 and May 2022.

Recent developer surveys showed early adoption of new cloud native Java frameworks, such as Quarkus and Micronaut. These frameworks produce native applications with low memory usage and fast startup times. Spring 6 can be seen as VMware’s answer to these competing frameworks.

Spring 6 will have the usual overlapping maintenance release branches. But “Spring Framework 6 users are strongly encouraged to join our feature release stream, not expecting to stay on 6.0.x for long but rather integrating 6.1, 6.2, etc. upgrades into their usual usage pattern.”

The release cadence may also move from an annual cycle to a semi-annual cycle, as Spring Boot has already done.

Requiring Java 17 is less aggressive than it looks today: by the time Spring 6 is available, Java 19 will be out. Using Jakarta EE 9 as a baseline breaks backward compatibility with the new jakarta package namespace but allows Spring to keep pace: some Spring dependencies already support Jakarta EE 9 (such as Tomcat 10 or Jetty 11), while others will not do so for another year (such as Hibernate ORM 6).
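As an illustration of that namespace change (my example, not from the original article): a servlet import moves from the javax package to its jakarta equivalent, while the class itself stays the same.

// Java EE 8 and earlier (e.g. Tomcat 9 with Spring 5):
// import javax.servlet.http.HttpServlet;

// Jakarta EE 9 (e.g. Tomcat 10, part of the Spring 6 baseline):
import jakarta.servlet.http.HttpServlet;

public class PingServlet extends HttpServlet
{
    // unchanged; only the package namespace differs
}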

Spring 6 is expected to embrace the Java Platform Module System (JPMS) and allow developers to write Spring applications with JPMS.
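A minimal sketch of what a JPMS descriptor for a Spring application could look like; the spring.* module names below follow the Automatic-Module-Name entries that current Spring jars declare, and the final Spring 6 module names are an assumption:

// module-info.java for a hypothetical application module
module com.example.app
{
    requires spring.context;   // assumed module name for spring-context
    requires spring.beans;     // assumed module name for spring-beans

    // open configuration packages for Spring's reflective access
    opens com.example.app.config;
}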

Spring Native today provides native compilation of Spring applications with GraalVM. It will get even better with the release of version 0.11, scheduled for November 19, 2021, which will build on GraalVM 21.3 and Spring Boot 2.6. But Spring Boot 3 will integrate native compilation seamlessly with a starter configuration and buildpacks, replacing Spring Native in Spring 6.

Spring Boot 3 probably won’t offer a native build of all Spring Initializr libraries right out of the gate. And as is the case with other frameworks, there is currently no easy way to tell whether a given Java library works natively.

Spring Observability is a new project in Spring 6 that builds on lessons learned from Spring Cloud Sleuth. It records metrics with Micrometer and offers tracing through providers such as OpenZipkin or OpenTelemetry. Unlike agent-based observability, Spring Observability will work in natively compiled Spring applications. And as part of the core Spring Framework, it will also deliver better information more efficiently.

VMware provided a few examples of deprecated features that it plans to remove, namely setter autowiring by name/type, certain FactoryBean arrangements, and “certain web-related options”. EJB and JAX-WS are third-party integrations that may be removed from Spring 6.

The first Spring 6 milestone is slated for late 2021, with a release candidate expected in July 2022. VMware invites the community to comment on its plans for Spring and Spring Boot.

InfoQ caught up with Kristen Strem, Senior Communications Manager at VMware, who served as a liaison with the Spring team for questions about Spring Framework 6 and Spring Boot 3.

Version 6 prepares the Spring Framework for another decade. Why was this necessary, and why now?

We see a major turning point for the Java ecosystem, with the industry finally adopting next-generation Java and downgrading JDK 8 to legacy status for legacy systems. We expect JDK 17 to play a key role in this change, delivering an attractive new long-term support generation with many accumulated improvements to the Java language, APIs, and the virtual machine – more so than JDK 11 LTS, which is mostly being adopted as a straightforward replacement for JDK 8 for support policy reasons. In addition, beyond JDK 17 as the next-generation LTS, we also expect key benefits from future JDK feature releases – for example, Project Loom – which we intend to take into account as optional features while maintaining a JDK 17 baseline. Eventually, we will keep supporting JDK versions up to JDK 29 LTS in our 6.x line.

You compared the Spring Framework 6.x feature releases to the OpenJDK release model. Does this mean that once a new 6.x version is released, you will stop producing releases for the previous version branch?

In terms of open-source maintenance releases, we’ll keep the same model as in recent years, with overlapping maintenance branches as usual. The OpenJDK release model provides inspiration for how major features such as Project Loom support can be incorporated into 6.x feature releases rather than a new major generation of the framework. Also, we could deliver more frequent 6.x feature releases at the core framework level, joining not only OpenJDK but also Spring Boot in releasing feature iterations twice a year. Finally, Spring Framework 6 users are strongly encouraged to join our feature release stream, not expecting to stay on 6.0.x for long but rather integrating 6.1, 6.2, etc. into their usual usage pattern.

You showed how Spring 6 developers can quickly reload and debug their running application in Kubernetes. Does this require the Tanzu Application Platform? If not, do you plan to make it available for every Kubernetes installation?

You don’t need any Tanzu-specific technology to reload and debug apps in Kubernetes. Spring Boot has come with a devtools module since v1.3, and it works great with Kubernetes. There are often a few decisions you will need to make about which tools to use and how to configure them. There is a good overview of this in the “Inner Loop Development with Spring Boot on Kubernetes” talk presented by Dave Syer at SpringOne this year.

Of course, part of the value of the Tanzu Application Platform is that we can provide sensible integrations with the tools that work well with Spring, and we can automate a lot of things so that developers don’t have to worry about them. This is something that many of our customers want, and you can see this automation in action in this year’s SpringOne keynote.

Many frameworks, including Spring, use GraalVM for native compilation. How do you see Spring cooperating with these frameworks to improve native compilation for the entire Java community? For example, you mentioned a repository with GraalVM compatibility information for Java libraries and benchmarks by the GraalVM team at SpringOne.

From day one, the Spring team has focused on helping the native Java ecosystem mature through close collaboration with the GraalVM team. Native Build Tools, which let you build and test native executables via Gradle and Maven plugins, are a great embodiment of this. They were initiated by the Spring and GraalVM teams, with a contribution to the JUnit project. Recently, the Micronaut team joined the effort, and similar collaborations would be very valuable for the Java ecosystem.

Having a shared vision between the Spring and GraalVM teams on how to make the native Java ecosystem more sustainable and maintainable also enables deep collaboration on the native configuration project, which should provide guidelines and tools for sharing native configuration between various frameworks. Note that this type of effort is made possible by taking advantage of recent enhancements to GraalVM native support: native testing, native configuration enhancements, build-time integration, and more.

In the end, it is perfectly fine and expected that each framework provides its own “secret sauce”. But the native Java ecosystem cannot be sustainable without more shared support in Java tools and libraries, and we will continue to put a lot of effort into this type of collaboration with the wider ecosystem as part of our work on first-class native support in Spring Boot 3.x.

In April, Juergen Hoeller wrote that the team is considering “introducing module-info definitions into the codebase” for Spring Framework 6. Does this mean that Spring Framework 6 will bring out-of-the-box support for writing Java applications with the Java Platform Module System?

Spring Framework 6 should indeed introduce module-info descriptors for all core framework modules, with the required dependencies on base Java modules kept to a minimum, allowing the JDK’s jlink tool to create custom runtime images for Spring setups. There may be constraints with optional third-party libraries and some configuration strategies, so it is still unclear how popular such explicit use of the module system will become among Spring users. So far, we haven’t received many requests for it, despite great general interest in recent versions of Java.

Videos and slides from the related SpringOne sessions are available: Spring 6, Spring Native, and Spring Observability.





The surprisingly high cost of Java variables with initial-capital names – Java Code Geeks



I have read hundreds of thousands, if not millions, of lines of Java code during my career: I have worked with the baselines of my projects, read the code of the open source libraries that I use, and read sample code in blogs, articles, and books. I have seen many different conventions and styles represented in the wide variety of Java code that I have read. However, in the vast majority of cases, Java developers have used identifiers starting with an uppercase letter for classes, enums, and other types, and camelcase identifiers starting with a lowercase letter for local and most other variables (fields used as constants and static fields sometimes had different naming conventions). Therefore, I was very surprised recently when I read some Java code (luckily not in the baseline of my current project) in which the author had capitalized the first letter of both the types and the local variable identifiers used in the code. What surprised me most was how difficult this small change in approach made it to read and parse this otherwise simple code.

The following is a representative example of the Java code style that I was so surprised to come across:

Code listing for DuplicateIdentifiersDemo.java

package dustin.examples.sharednames;

import java.util.Date;
import java.util.List;
import java.util.concurrent.TimeUnit;

import static java.lang.System.out;

/**
 * Demonstrates ability to name variable exactly the same as type,
 * despite this being a really, really, really bad idea.
 */
public class DuplicateIdentifiersDemo
{
    /** "Time now" at instantiation, measured in milliseconds. */
    private final static long timeNowMs = new Date().getTime();

    /** Five consecutive daily instances of {@link Date}. */
    private final static List<Date> Dates = List.of(
            new Date(timeNowMs - TimeUnit.DAYS.toMillis(1)),
            new Date(timeNowMs),
            new Date(timeNowMs + TimeUnit.DAYS.toMillis(1)),
            new Date(timeNowMs + TimeUnit.DAYS.toMillis(2)),
            new Date(timeNowMs + TimeUnit.DAYS.toMillis(3)));

    public static void main(final String[] arguments)
    {
        String String;
        final Date DateNow = new Date(timeNowMs);
        for (final Date Date : Dates)
        {
            if (Date.before(DateNow))
            {
                String = "past";
            }
            else if (Date.after(DateNow))
            {
                String = "future";
            }
            else
            {
                String = "present";
            }
            out.println("Date " + Date + " is the " + String + ".");
        }
    }
}

The code I encountered was only slightly more complicated than the example above, but it was more painful to mentally parse than it should have been because the local variables were named exactly the same as their respective types. I realized that my years of reading and analyzing Java code have led me to instinctively treat identifiers starting with a lowercase letter as variable names and identifiers starting with an uppercase letter as type names. While this kind of instinctive assumption usually lets me read code faster and understand what it does, in this case the assumption worked against me: I had to make a special effort to keep straight which occurrences of “String” and “Date” were variable names and which were class names.

Although the code shown above is relatively straightforward, the unusual naming convention for the variable names makes it harder to read than it should be, especially for experienced Java developers who have learned to size up code quickly by taking advantage of well-known and generally accepted coding conventions.

The Java Tutorials section on “Java Language Keywords” provides the “list of keywords in the Java programming language” and states that “you cannot use any of [the listed keywords] as identifiers in your programs”. It also mentions that the literals (but not keywords) true, false, and null likewise cannot be used as identifiers. Note that this list of keywords includes primitive types such as boolean and int, but does not include reference-type identifiers such as String, Boolean, and Integer.

Because very nearly all the Java code I had read before used initial lowercase letters for non-constant, non-static variable names, I wondered whether this convention is mentioned in the Java Tutorials section on naming variables. It is. That “Variables” section says: “Every programming language has its own set of rules and conventions for the kinds of names that you’re allowed to use, and the Java programming language is no different. … If the name you choose consists of only one word, spell that word in all lowercase letters. If it consists of more than one word, capitalize the first letter of each subsequent word. The names gearRatio and currentGear are prime examples of this convention.”
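For contrast, here is a conventionally named version of the loop from the listing above (my sketch, not from the code I encountered); the type/variable distinction is now obvious at a glance:

final Date dateNow = new Date(timeNowMs);
for (final Date date : dates)
{
    final String description;
    if (date.before(dateNow))
    {
        description = "past";
    }
    else if (date.after(dateNow))
    {
        description = "future";
    }
    else
    {
        description = "present";
    }
    // With conventional naming, Date and String can only be type names here.
    out.println("Date " + date + " is the " + description + ".");
}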

Conclusion

I have long believed in conventions that allow for more effective reading and mental parsing of code. Encountering this code with initial capital letters for its camelcase variable-name identifiers reminded me of this and led me to conclude that the more generally accepted a convention is for a particular language, the more a departure from that convention hurts readability.




KivaKit Resources – Java Code Geeks



A resource is a stream of data that can be opened, read or written, and then closed. KivaKit provides a mini resource framework that allows easy and consistent access to many types of resources and makes it easy to create new ones. Examples of KivaKit resources:

  • Files
  • Sockets
  • Zip or JAR file entries
  • S3 objects
  • Package resources
  • HDFS files
  • HTTP responses
  • Input streams
  • Output streams

Examples of use cases

Some short examples of resource use cases:

Read the lines of a .csv file from a package, reporting progress:

var resource = PackageResource.of(getClass(), "target-planets.csv");
try (var reader = listenTo(new CsvReader(resource, schema, ',', reporter)))
{
    for (var line : reader.lines())
    {
        [...]
    }
}

Note that if this code is in a KivaKit Component, then the first line can be reduced to:

var resource = packageResource("target-planets.csv");

Write a string to a file on S3:

var file = listenTo(File.parse("s3://mybucket/myobject.txt"));    
try (var out = file.writer().printWriter())
{
    out.println("Start Operation Impending Doom III in 10 seconds");
}

Safely extract an entry (ensuring no partial results) from a .zip file:

var file = listenTo(File.parse("/users/jonathan/input.zip"));
var folder = listenTo(Folder.parse("/users/jonathan"));
try (var zip = ZipArchive.open(file, reporter, READ))
{
    listenTo(zip.entry("data.txt")).safeCopyTo(folder, OVERWRITE);
}

In each case, the code is assumed to be in a class implementing Repeater. The listenTo() calls add this as a listener of the argument object, creating a chain of listeners. If anything notable happens in a Resource (for example, an attempt to open the resource when it does not exist), a message will be broadcast down this listener chain.

Resource problems and messaging

All Resources inherit and use the fatal() method for reporting unrecoverable opening, reading, and writing problems (other methods may have different semantics, such as those with a boolean return value). The fatal() method in Broadcaster, the base interface of Repeater, does two things:

  1. Broadcasts a FatalProblem message to listeners
  2. Throws an IllegalStateException

This design decouples the broadcasting of a FatalProblem message to listeners from the change in control flow that occurs as a result of throwing an exception. The result is that, in most cases, exceptions need to be caught only when an operation is recoverable, and the information in the exception can usually be ignored because it has already been broadcast (and probably logged, depending on the terminal listeners).

For example, in this common (but mildly unfortunate) idiom, error information is propagated to the caller with an exception that is caught, qualified with a cause, and logged:

class Launcher
{
    void doDangerousStuff()
    {
        [...]
        
        throw new DangerousStuffException("Whoops.");
    }
}
 
class AttackPlanet
{
    boolean prepareMissileLauncher()
    {
        try
        {
            doDangerousStuff();
            return true;
        }
        catch (DangerousStuffException e)
        {
            LOGGER.problem(e, "Unable to do dangerous stuff");
            return false;
        }
    }
}

A KivaKit alternative to this idiom is as follows:

class Launcher extends BaseRepeater
{
    void doDangerousStuff()
    {
        [...]
 
        fatal("Unable to do dangerous stuff: Whoops.");
    }
}

class AttackPlanet extends BaseRepeater
{
    boolean prepareMissileLauncher()
    {    
        listenTo(new Launcher()).doDangerousStuff();
        return true;
    }
}

After the FatalProblem message in doDangerousStuff() is broadcast by the fatal() method, control flow propagates separately, via the IllegalStateException thrown by the same fatal() method, to any caller on the call stack that might be able to substantively address the problem (as opposed to just logging it). For more information, see KivaKit messaging.

Design

Okay, so how do KivaKit resources work?

The design of the KivaKit resource module is fairly complex, so we will focus on the most important, high-level aspects in this article.

A simplified UML diagram:

The Resource class in this diagram is central. This class:

  • Has a ResourcePath (from ResourcePathed)
  • Has a size in bytes (from ByteSized)
  • Has a time of last modification (from ModificationTimestamped)
  • Is a ReadableResource

Since all resources are ReadableResources, they can be opened with Readable.openForReading(), or read with the convenience methods in ResourceReader (which is accessible with ReadableResource.reader()).

In addition, some resources are WritableResources. These can be opened with Writable.openForWriting(), and written to with methods in the convenience class ResourceWriter.

The Resource class itself can determine whether the resource exists() and whether it isRemote(). Remote resources can be materialized to a temporary file on the local filesystem before reading (using methods that are not in the UML diagram). Resources can also make a safe copy of their contents to a destination File or Folder with the two safeCopyTo() methods. Safe copying involves 3 steps (a plain-Java sketch follows below):

  1. Write to a temporary file
  2. Delete the destination file
  3. Rename the temporary file to the destination file name
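As an illustration only (my sketch with java.nio, not KivaKit's actual implementation), the three steps look like this:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public final class SafeCopy
{
    public static void safeCopy(Path source, Path destination) throws IOException
    {
        // 1. Write the content to a temporary file in the destination folder
        var temp = Files.createTempFile(destination.getParent(), "safe-copy-", ".tmp");
        Files.copy(source, temp, StandardCopyOption.REPLACE_EXISTING);

        // 2. Delete the destination file, if present
        Files.deleteIfExists(destination);

        // 3. Rename the temporary file to the destination file name
        Files.move(temp, destination);
    }
}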

Finally, BaseWritableResource extends BaseReadableResource to add the ability to delete a resource and to save an InputStream to the resource while reporting progress.

To give an idea of the resources that KivaKit provides, here is an overview of the hierarchy of readable and writable resource classes:

Implementing a resource

Now let’s take a quick look at a Resource implementation. Implementing a simple ReadableResource requires only an onOpenForReading() method and a sizeInBytes() method. Defaults for everything else are provided by BaseReadableResource. The StringResource class is a good example. It looks like this:

public class StringResource extends BaseReadableResource
{
    private final String value;

    public StringResource(final ResourcePath path, final String value)
    {
        super(path);
        this.value = value;
    }

    @Override
    public InputStream onOpenForReading()
    {
        return new StringInput(value);
    }

    @Override
    public Bytes sizeInBytes()
    {
        return Bytes.bytes(value.length());
    }
}

Conclusion

A few things we haven’t talked about:

  • All resources transparently implement various kinds of compression and decompression through the Codec interface
  • The ProgressReporter interface and I/O progress reporting
  • Generic resource identifiers and their resolution
  • The service provider interface (SPI) for File and Folder

Code

The resource module covered above is available in kivakit-resource in the KivaKit project:

<dependency>
    <groupId>com.telenav.kivakit</groupId>
    <artifactId>kivakit-resource</artifactId>
    <version>${kivakit.version}</version>
</dependency>

Posted on Java Code Geeks with the permission of Jonathan Locke, partner of our JCG program. See the original article here: KivaKit Resources

The opinions expressed by contributors to Java Code Geeks are their own.




How much faster is Java 17? – Java Code Geeks



Java 17 (released yesterday) comes with many new features and improvements. However, most of them require code changes to benefit from them. Except for performance. Just change your JDK installation and you get a free performance boost. But how much? Is it worth it? Let’s find out by comparing the benchmarks of JDK 17, JDK 16, and JDK 11.

Benchmark methodology

  • Hardware: a stable machine without any other computationally demanding processes running, with an Intel® Xeon® Silver 4116 @ 2.1 GHz (12 cores total / 24 threads) and 128 GiB RAM, running RHEL 8 x86_64.
  • JDK (used both to compile and run):

    • JDK 11

      openjdk 11.0.12 2021-07-20
      OpenJDK Runtime Environment Temurin-11.0.12+7 (build 11.0.12+7)
      OpenJDK 64-Bit Server VM Temurin-11.0.12+7 (build 11.0.12+7, mixed mode)
    • JDK 16

      openjdk 16.0.2 2021-07-20
      OpenJDK Runtime Environment (build 16.0.2+7-67)
      OpenJDK 64-Bit Server VM (build 16.0.2+7-67, mixed mode, sharing)
    • JDK 17 (uploaded 2021-09-06)

      openjdk 17 2021-09-14
      OpenJDK Runtime Environment (build 17+35-2724)
      OpenJDK 64-Bit Server VM (build 17+35-2724, mixed mode, sharing)
  • JVM options: -Xmx3840M plus an explicitly specified garbage collector:

    • -XX:+UseG1GC for G1GC, the low-latency garbage collector (default in all three JDKs).
    • -XX:+UseParallelGC for ParallelGC, the high-throughput garbage collector.
  • Main class: org.optaplanner.examples.app.GeneralOptaPlannerBenchmarkApp from the optaplanner-examples module in OptaPlanner 8.10.0.Final.

    • Each run solves 11 planning problems with OptaPlanner, such as employee rostering, school timetabling, and cloud optimization. Each planning problem runs for 5 minutes. Logging is set to INFO. The benchmark starts with a 30-second JVM warm-up, which is discarded.
    • Solving a planning problem involves no IO (except a few milliseconds at startup to load the input). A single CPU is completely saturated. It constantly creates many short-lived objects, which the GC collects afterwards.
    • The benchmarks measure the number of score calculations per second. More is better. Calculating the score of a proposed planning solution is not trivial: it involves many computations, including checking for conflicts between every entity and every other entity.
  • Runs: each combination of JDK and garbage collector is run 3 times sequentially. The results below are the averages of those 3 runs.

Results

Java 11 (LTS) and Java 16 vs. Java 17 (LTS)

Scores calculated per second (higher is better). Dataset columns: Cloud balancing (200c, 800c), Machine reassignment (B1, B10), Course scheduling (c7, c8), Exam scheduling (s2, s3), Nurse rostering (m1, mh1), Traveling tournament (nl14).

G1GC (the default):

| | Average | 200c | 800c | B1 | B10 | c7 | c8 | s2 | s3 | m1 | mh1 | nl14 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| JDK 11 | | 103,606 | 96,700 | 274,103 | 37,421 | 11,779 | 13,660 | 14,354 | 8,982 | 3,585 | 3,335 | 5,019 |
| JDK 16 | | 109,203 | 97,567 | 243,096 | 38,031 | 13,950 | 16,251 | 15,218 | 9,528 | 3,817 | 3,508 | 5,472 |
| JDK 17 | | 106,147 | 98,069 | 245,645 | 42,096 | 14,406 | 16,924 | 15,619 | 9,726 | 3,802 | 3,601 | 5,618 |
| 11 → 17 | 8.66% | 2.45% | 1.42% | -10.38% | 12.49% | 22.30% | 23.90% | 8.81% | 8.28% | 6.05% | 7.98% | 11.95% |
| 16 → 17 | 2.41% | -2.80% | 0.51% | 1.05% | 10.69% | 3.27% | 4.14% | 2.63% | 2.08% | -0.39% | 2.65% | 2.67% |

ParallelGC:

| | Average | 200c | 800c | B1 | B10 | c7 | c8 | s2 | s3 | m1 | mh1 | nl14 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| JDK 11 | | 128,553 | 121,974 | 292,761 | 48,339 | 13,397 | 15,540 | 16,392 | 9,887 | 4,409 | 4,148 | 6,097 |
| JDK 16 | | 128,723 | 123,314 | 281,882 | 45,622 | 16,243 | 18,528 | 17,742 | 10,744 | 4,608 | 4,348 | 6,578 |
| JDK 17 | | 130,215 | 124,498 | 262,753 | 45,058 | 16,479 | 18,904 | 18,023 | 10,845 | 4,658 | 4,430 | 6,641 |
| 11 → 17 | 6.54% | 1.29% | 2.07% | -10.25% | -6.79% | 23.00% | 21.64% | 9.95% | 9.68% | 5.63% | 6.80% | 8.92% |
| 16 → 17 | 0.37% | 1.16% | 0.96% | -6.79% | -1.24% | 1.45% | 2.03% | 1.59% | 0.94% | 1.08% | 1.89% | 0.96% |

Note

Looking at the raw data from the 3 individual runs (not shown here), the machine reassignment numbers (B1 and B10) fluctuate a lot between runs on the same JDK and GC – often by more than 10%. The other numbers do not suffer from this unreliability.

It is probably best to ignore the machine reassignment numbers. But to avoid cherry-picking, these results and averages include them.

G1GC vs. ParallelGC on Java 17

| | Average | 200c | 800c | B1 | B10 | c7 | c8 | s2 | s3 | m1 | mh1 | nl14 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| G1GC | | 106,147 | 98,069 | 245,645 | 42,096 | 14,406 | 16,924 | 15,619 | 9,726 | 3,802 | 3,601 | 5,618 |
| ParallelGC | | 130,215 | 124,498 | 262,753 | 45,058 | 16,479 | 18,904 | 18,023 | 10,845 | 4,658 | 4,430 | 6,641 |
| G1 → ParallelGC | 16.39% | 22.67% | 26.95% | 6.96% | 7.04% | 14.39% | 11.69% | 15.39% | 11.50% | 22.50% | 23.01% | 18.20% |

Summary

On average, for OptaPlanner use cases, these benchmarks indicate that:

  • Java 17 is 8.66% faster than Java 11 and 2.41% faster than Java 16 with G1GC (the default).
  • Java 17 is 6.54% faster than Java 11 and 0.37% faster than Java 16 with ParallelGC.
  • The parallel garbage collector is 16.39% faster than the G1 garbage collector.

No big surprises here: the latest JDK is faster, and the high-throughput garbage collector is faster than the low-latency garbage collector.

Wait a minute here …

When we benchmarked JDK 15, we saw that Java 15 was 11.24% faster than Java 11. Now the gain of Java 17 over Java 11 is smaller. Does that mean Java 17 is slower than Java 15?

Well, no. Java 17 is also faster than Java 15. Those earlier benchmarks were run on a different codebase (OptaPlanner 7.44 instead of 8.10). Don’t compare apples and oranges.

Conclusion

In conclusion, the performance gained in the JDK 17 release is well worth the upgrade – at least for OptaPlanner use cases.

Additionally, the fastest garbage collector for these use cases is still ParallelGC, rather than G1GC (the default).

Posted on Java Code Geeks with the permission of Geoffrey De Smet, partner of our JCG program. See the original article here: How much faster is Java 17?

The opinions expressed by contributors to Java Code Geeks are their own.




6 new Java features you shouldn’t miss



Java quietly underwent one of the biggest changes in its development in 2018 with the adoption of a six-month release cadence. This bold new plan gives Java developers a new feature release every six months.

It’s wonderful for keeping Java fresh and relevant, but it’s pretty easy to miss features as they’re introduced. This article summarizes and gives an overview of several useful new features.

The Optional class

One of the most common errors is the null pointer exception. And while it may be familiar, it is a very verbose problem to guard against. At least it was until Java 8 introduced (and Java 10 refined) the Optional class.

In essence, the Optional class allows you to wrap a variable and then use the wrapper’s methods to deal with nullity more succinctly.

Listing 1 contains an example of a garden-variety null pointer error, in which a class reference, foo, is null and a method, foo.getName(), is called on it.

Listing 1. Null pointer without Optional

public class MyClass {
    public static void main(String args[]) {
      InnerClass foo = null;
      System.out.println("foo = " + foo.getName());
    }
}
class InnerClass {
  String name = "";
  public String getName(){
      return this.name;
  }
}

Optional offers a number of approaches for dealing with such situations, depending on your needs. It sports an isPresent() method you can use to do an if-check, but that ends up being quite verbose. Optional also has methods for functional handling. For example, Listing 2 shows how you can use ifPresent() – notice the one-letter difference from isPresent() – to execute the output code only if a value is present.
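Listing 2 itself is not reproduced in this excerpt; a minimal sketch of what such code could look like, reusing the InnerClass from Listing 1, is:

import java.util.Optional;

public class MyClass {
    public static void main(String args[]) {
      InnerClass foo = null;
      // ifPresent() runs the lambda only when a value is present,
      // so no NullPointerException can occur here.
      Optional.ofNullable(foo)
              .ifPresent(f -> System.out.println("foo = " + f.getName()));
    }
}
class InnerClass {
  String name = "";
  public String getName(){
      return this.name;
  }
}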

Copyright © 2021 IDG Communications, Inc.




KivaKit command line parsing – Java Code Geeks



The kivakit-commandline module provides the switch and argument parsing used by kivakit-application. Let’s see how it works. When an application starts (see KivaKit applications), the Application.run(String[] arguments) method uses the kivakit-commandline module to parse the array of arguments passed to main(). Conceptually, this code looks like this:

public final void run(String[] arguments)
{
    onRunning();
    
    [...]

    commandLine = new CommandLineParser(this)
            .addSwitchParsers(switchParsers())
            .addArgumentParsers(argumentParsers())
            .parse(arguments);

In run(), a CommandLineParser instance is created and configured with the application’s switch and argument parsers, as returned by switchParsers() and argumentParsers() in our Application subclass. Then, when the parse(String[]) method is called, the command line is parsed. The resulting CommandLine model is stored in Application and is used later by our application to retrieve the values of arguments and switches.

Parsing

An overview of the classes used in command line parsing can be seen in this abbreviated UML diagram:

The CommandLineParser class references a SwitchParserList and an ArgumentParserList. When its parse(String[]) method is called, it uses these parsers to parse the switches and arguments of the given string array into a SwitchList and an ArgumentList. It then returns a CommandLine object populated with these values.

Note that all switches must be of the form -switch-name=[value]. If a string in the argument array is not of this form, it is considered an argument and not a switch.

Once a CommandLine has been successfully parsed, it is available via Application.commandLine(). The values of specific arguments and switches can be retrieved via its get() and argument() methods. The Application class provides convenience methods, so the call to commandLine() can often be omitted for brevity.

Example

In the example in KivaKit applications, the argument and switch parsers returned by the sample application were declared like this:

import static com.telenav.kivakit.commandline.SwitchParser.booleanSwitchParser;
import static com.telenav.kivakit.filesystem.File.fileArgumentParser;

[...]

private ArgumentParser<File> INPUT =
        fileArgumentParser("Input text file")
                .required()
                .build();

private SwitchParser<Boolean> SHOW_FILE_SIZE =
        booleanSwitchParser("show-file-size", "Show the file size in bytes")
                .optional()
                .defaultValue(false)
                .build();

The Application subclass then provides these parsers to KivaKit like this:

@Override
protected List<ArgumentParser<?>> argumentParsers()
{
    return List.of(INPUT);
}

@Override
protected Set<SwitchParser<?>> switchParsers()
{
    return Set.of(SHOW_FILE_SIZE);
}

Then, in onRun(), the input file is retrieved by calling the argument() method with the INPUT argument parser:

var input = argument(INPUT);

and the boolean switch SHOW_FILE_SIZE is accessed in the same way with get():

if (get(SHOW_FILE_SIZE))
    {
        [...]
    }

This is all that is needed to do basic switch parsing in KivaKit.

But there are a few questions to ask about how this all works. How are arguments and switches validated? How does KivaKit automatically provide command line help? And how do we define new SwitchParsers and ArgumentParsers?

Command line validation

The KivaKit validation mini-framework is used to validate switches and arguments. As shown in the diagram below, argument and switch validators are implemented in the (private) classes ArgumentListValidator and SwitchListValidator, respectively. When arguments and switches are parsed by CommandLineParser, these validators are used to ensure that the resulting parsed values are valid.

For the list of switches, SwitchListValidator ensures that:

  1. No required switch is omitted
  2. No switch value is invalid (as determined by the switch parser’s validation)
  3. No duplicate switch is present (this is not allowed)
  4. All switches present are recognized by a switch parser

For the list of arguments, ArgumentListValidator ensures that the number of arguments is acceptable. ArgumentParser.Builder can specify a quantifier for an argument parser through one of these methods:

public Builder<T> oneOrMore()
public Builder<T> optional()
public Builder<T> required()
public Builder<T> twoOrMore()
public Builder<T> zeroOrMore()

Argument parsers that accept more than one argument are allowed only at the end of the list of argument parsers returned by Application.argumentParsers(). For example, this code:

private static final ArgumentParser<Boolean> RECURSE =
        booleanArgumentParser("True to search recusively")
                .required()
                .build();

private static final ArgumentParser<Folder> ROOT_FOLDER =
        folderArgumentParser("Root folder(s) to search")
                .oneOrMore()
                .build();

[...]

@Override
protected List<ArgumentParser<?>> argumentParsers()
{
    return List.of(RECURSE, ROOT_FOLDER);
}

is valid and will parse command line arguments like this:

true /usr/bin /var /tmp

Here each root folder can be retrieved with Application.argument(int index, ArgumentParser), passing in indexes 1, 2, and 3.
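For instance, a sketch of that retrieval using the method named above (treat the exact calls as illustrative rather than verbatim KivaKit code):

// Given the command line: true /usr/bin /var /tmp
boolean recurse = argument(0, RECURSE);    // "true"
Folder first = argument(1, ROOT_FOLDER);   // /usr/bin
Folder second = argument(2, ROOT_FOLDER);  // /var
Folder third = argument(3, ROOT_FOLDER);   // /tmp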

However, it would not be valid to return these two argument parsers in the reverse order, like this:

@Override
protected List<ArgumentParser<?>> argumentParsers()
{
    // NOT ALLOWED
    return List.of(ROOT_FOLDER, RECURSE);
}

since the ROOT_FOLDER parser must be the last in the list.

Command line help

Command line help for applications is provided automatically by KivaKit. For example, forgetting to pass the -deployment switch (more on deployments in a future article) to a server that expects such a switch results in:

┏━━━━━━━━━━┫ COMMAND LINE ERROR(S) ┣━━━━━━━━━━┓
┋     ○ Required switch -deployment not found ┋
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
 
KivaKit 0.9.9-SNAPSHOT (beryllium gorilla)

Usage: DataServer 0.9.0-SNAPSHOT <switches> <arguments>

My cool data server.

Arguments:

  <none>

Switches:

  Required:

  -deployment=Deployment (required) : The deployment configuration to run

    ○ localpinot - Pinot on local host
    ○ development - Pinot on pinot-database.mypna.com
    ○ localtest - Test database on local host
  
  Optional:

  -port=Integer (optional, default: 8081) : The first port in the range of ports to be allocated
  -quiet=Boolean (optional, default: false) : Minimize output

The description comes from Application.description(), which we can override in our application. Help for arguments and switches is generated from the argument and switch parsers based on their name, description, type, quantity, default value, and list of valid values.

Creating new switch and argument parsers

Creating a new switch (or argument) parser is very easy if a KivaKit type converter exists for the switch’s type. For example, in the application above, we created the SHOW_FILE_SIZE switch parser by calling SwitchParser.booleanSwitchParser() to create a builder. We then called optional() to make the switch optional and give it a default value of false before building the parser with build():

import static com.telenav.kivakit.commandline.SwitchParser.booleanSwitchParser;

[...]

private SwitchParser<Boolean> SHOW_FILE_SIZE =
    booleanSwitchParser("show-file-size", "Show file size in bytes")
            .optional()
            .defaultValue(false)
            .build();

The SwitchParser.booleanSwitchParser() static method creates a SwitchParser.Builder like this:

public static Builder<Boolean> booleanSwitchParser(String name, String description)
{
    return builder(Boolean.class)
            .name(name)
            .converter(new BooleanConverter(LOGGER))
            .description(description);
}

As we can see, the Builder.converter(Converter) method is all that is needed to convert the switch from a string on the command line to a boolean value, as in:

-show-file-size=true

In general, if a string converter already exists for a type, it is trivial to create new switch parsers for that type (a sketch of one follows the list below). Since KivaKit has many handy string converters, it also provides many argument and switch parsers. Some of the types that have switch and/or argument parsers:

  • Boolean, Double, Integer, Long
  • Minimum, Maximum
  • Bytes
  • Count
  • LocalTime
  • Pattern
  • Percent
  • Version
  • Resource, ResourceList
  • File, FilePath, FileList
  • Folder, FolderList
  • Host
  • Port
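For example, a sketch of a builder for an Integer switch parser, following the booleanSwitchParser() pattern shown earlier (the IntegerConverter class name is an assumption based on KivaKit's converter naming):

public static Builder<Integer> integerSwitchParser(String name, String description)
{
    return builder(Integer.class)
            .name(name)
            .converter(new IntegerConverter(LOGGER))  // assumed converter class
            .description(description);
}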

Code

The complete code for the example shown here is available in the kivakit-examples repository. The switch parsing classes are in:

<dependency>
    <groupId>com.telenav.kivakit</groupId>
    <artifactId>kivakit-commandline</artifactId>
    <version>${kivakit.version}</version>
</dependency>

but it is not normally necessary to include it directly, since the kivakit-application module provides easier access to the same functionality:

<dependency>
    <groupId>com.telenav.kivakit</groupId>
    <artifactId>kivakit-application</artifactId>
    <version>${kivakit.version}</version>
</dependency>

Posted on Java Code Geeks with the permission of Jonathan Locke, partner of our JCG program. See the original article here: KivaKit Command Line Parsing

The opinions expressed by contributors to Java Code Geeks are their own.




Micronaut 3.0 brings significant changes to prepare for future development



Object Computing, Inc. has published Micronaut 3.0, with the removal of the default reactive streams implementation, a change in annotation inheritance, and compile-time validation of HTTP components. This release is the culmination of work to resolve past design flaws in order to make the framework more intuitive and adaptable to future requirements.

Micronaut, the JVM-based full-stack development framework for building modular and easily testable microservices and serverless applications, defines as its core mission reinventing application startup time and memory consumption. The “Micronaut way” is based on the fact that application startup time and memory consumption are unrelated to the size of your codebase, resulting in a “monumental leap in startup time, blazing-fast throughput, and a minimal memory footprint”, as stated on its website.

Previous versions included RxJava2 as a transitive dependency, and it was the default reactive streams implementation used for many features in the framework. The release of RxJava3 made it a good time for a decision: upgrade, or move to Project Reactor. The Micronaut team chose the latter because of its ability to maintain state within the reactive flow and its wider adoption by the community. The team also recommends that projects currently using RxJava2 switch to Project Reactor. This will result in fewer classes on the runtime classpath and fewer potential issues with context propagation and reactive type conversion.

The current version changes the way developers interact with annotations. Until now, annotations were inherited from interfaces or parent classes. From version 3.0 on, only annotations annotated with @Inherited will be inherited. Among other things, as a result of this change, annotations related to bean scopes or around/introduction advice will no longer be inherited.
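For illustration (my sketch, not from the Micronaut documentation), a custom annotation that should still be inherited by subclasses under Micronaut 3 must itself carry @Inherited:

import java.lang.annotation.Inherited;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Because of @Inherited, classes extending a class annotated with
// @Audited are treated as annotated with it too.
@Inherited
@Retention(RetentionPolicy.RUNTIME)
@interface Audited
{
}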

Some fully qualified annotation names have changed, forced either by license changes or by the relocation of HTTP-related components. In the first case, the javax namespace licensing issues affected all annotations under javax.annotation; with this release, jakarta.annotation.PreDestroy and jakarta.annotation.PostConstruct are the recommended alternatives, and the javax.inject annotations have been replaced by the jakarta.inject annotations. For common uses of javax.inject.Provider, the recommended alternative is io.micronaut.context.BeanProvider. In the second case, the compile-time HTTP-related components have been moved to a new module, io.micronaut:micronaut-http-validation. This dependency must be added to the annotation processor classpath to keep validating these classes at compile time.

There are enhancements to Micronaut’s Inversion of Control (IoC) mechanism for improved granularity and control. Developers can now qualify the injection of a type by its generic arguments. A class that uses type arguments can be targeted by specifying those generics in the argument type:

@Inject
public Vehicle(Engine<V8> engine) {
   ...
}

Beans can now be exposed as a supertype or interface rather than as the type they are. This can be used to prevent direct lookup of an implementing class and to force lookup of the bean through its interface:

@Bean(typed = Engine.class)
class V8Engine implements Engine {

}

Previously, Aspect-Oriented Programming (AOP) advice could not be applied to lifecycle methods like @PostConstruct and @PreDestroy. Now, constructors and lifecycle methods can be intercepted so that AOP advice applies to them.

Another novelty is the way server filters are called: they are now called only once per request under all conditions. Exceptions are no longer propagated to filters; instead, the resulting error response is passed through the reactive stream. Previously, server filters could be called multiple times when an exception was thrown.

Other changes come in the GraalVM space, where adding the @Introspected annotation used to also add GraalVM configuration allowing reflective use of the class. That behavior is gone because, in the vast majority of cases, it is not needed. To restore it, add the @ReflectiveAccess annotation to the class. Another piece of under-the-hood “magic” until now was the automatic addition of the src/main/resources folder to the native image. Micronaut build plugin users keep the same behavior, while Maven users are now responsible for creating and maintaining the resource configuration.

As with most major releases, Micronaut 3.0 comes with breaking changes, but it also promises an easy upgrade with OpenRewrite, a framework that modifies source code to migrate an application from Micronaut 2 to Micronaut 3. This can be done with the Maven or Gradle plugin.

Micronaut was officially introduced in March 2018 by Graeme Rocher, then Grails and Micronaut product lead at OCI, at the Greach conference, and was open-sourced at the end of May 2018. Three years after its first GA release, this third version is a consolidation release. The team put into practice the lessons gathered during that period, improving the architecture and giving users more flexibility in which libraries they can use and what the memory footprint will look like. It also aligned its namespace to avoid the licensing issues related to the javax namespace.





Kubernetes pod as a bastion host – Java Code Geeks



In cloud-native applications, private networks, databases, and services are a reality.

An infrastructure can be completely private, with only a limited number of entry points available.

Obviously, the fewer the entry points, the better.

There are still cases where no infrastructure has been put in place for private services and for connecting to them. However, if there is access through Kubernetes, HAProxy can help.

HAProxy can accept a configuration file. It is easy to upload this file as a ConfigMap and then mount the ConfigMap on a Kubernetes pod. The HAProxy pod can then start with this configuration and thus act as a proxy.

Let’s start with the HAProxy configuration. The target will be a MySQL database with a private IP.

apiVersion: v1
data:
  haproxy.cfg: |-
    global
    defaults
        timeout client          30s
        timeout server          30s
        timeout connect         30s

    frontend frontend
        bind    0.0.0.0:3306
        default_backend backend

    backend backend
        mode                    tcp
        server upstream 10.0.1.7:3306
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: mysql-haproxy-port-forward

On the backend, the upstream server line just gets the IP and port of the database; on the frontend, we specify the local port and address we will listen on.

By doing the above, we have a way to mount the config file on our Kubernetes pod.

Now let’s create the pod:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: mysql-forward-pod
  name: mysql-forward-pod
spec:
  containers:
    - command:
      - haproxy
      - -f
      - /usr/local/etc/haproxy/haproxy.cfg
      - -V
      image: haproxy:1.7-alpine
      name: mysql-forward-pod
      resources: {}
      volumeMounts:
        - mountPath: /usr/local/etc/haproxy/
          name: mysql-haproxy-port-forward
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  volumes:
    - name: mysql-haproxy-port-forward
      configMap:
        name: mysql-haproxy-port-forward
status: {}

In the volumes section, we define the ConfigMap as a volume. In the containers section, we mount the ConfigMap at a path, giving the container access to the file.
We use an HAProxy image and provide the command to start HAProxy with the file we mounted earlier.

To test that this works, use a kubectl session that has port-forwarding permissions and run:

kubectl port-forward  mysql-forward-pod 3306:3306

You will be able to access MySQL from your local host.
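As a quick connectivity check (a sketch with placeholder credentials and database name, assuming the MySQL JDBC driver is on the classpath), you can connect through the forwarded port:

import java.sql.Connection;
import java.sql.DriverManager;

public class ForwardCheck
{
    public static void main(String[] args) throws Exception
    {
        // 127.0.0.1:3306 is the local port forwarded by kubectl
        try (Connection connection = DriverManager.getConnection(
                "jdbc:mysql://127.0.0.1:3306/mydb", "user", "password"))
        {
            System.out.println("Connected to: "
                    + connection.getMetaData().getDatabaseProductVersion());
        }
    }
}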

Posted on Java Code Geeks with the permission of Emmanuel Gkatziouras, partner of our JCG program. See the original article here: Kubernetes Pod as a bastion host

The opinions expressed by contributors to Java Code Geeks are their own.


