Node 18 is going EOL #2401

Open
cjihrig opened this issue Apr 29, 2025 · 9 comments

Comments

@cjihrig
Contributor

cjihrig commented Apr 29, 2025

Node 18 is going EOL tomorrow (April 30). How do we want to handle that? Do we want to immediately drop support for it, or continue supporting it for some time? If we want to continue supporting it - for how long? If we want to drop it immediately, many people advocate for dropping support being a semver major change.

@rossanthony
Contributor

As a consumer of this package I personally wouldn't expect support for Node 18 beyond its EOL date. Anyone still using Node v18 in production after April 30 probably won't be giving priority to patching dependencies anyway; at that point their priority should be getting onto the next LTS of Node.

@rossanthony
Contributor

@cjihrig PS: what's the process for getting a new release cut? I'm waiting to test something that needs this recently merged fix: #2367

@cjihrig
Contributor Author

cjihrig commented Apr 29, 2025

As a consumer of this package I personally wouldn't expect support for Node 18 beyond its EOL date.

That's definitely the preferable attitude IMO 😄. But the truth is that there are a lot of people who don't feel that way. There are even companies that offer commercial support for EOL versions. For the purposes of this library, the question is just whether or not to do a major version bump when dropping old versions.
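
For reference, dropping a Node line is typically signaled by raising the engines range in package.json and shipping that change as a semver-major release. A hypothetical snippet (the version numbers below are illustrative, not a decision for this package):

{
  "engines": {
    "node": ">=20.0.0"
  }
}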

what's the process for getting a new release cut?

I think @brendandburns is the only person that can cut a new release, but I could be mistaken.

@brendandburns
Contributor

Actually, anyone with GitHub Actions privileges on the repo (should be all maintainers?) can cut a release.

@rossanthony we typically cut a release for each new Kubernetes release. If there is an issue with major impact, we can cut a patch release. How urgent is this fix for you?

@rossanthony
Contributor

@brendandburns it's not super urgent and isn't impacting us in production. But we're trying to build a solution for sharing auth tokens via k8s secrets, leveraging the informer in this library. We previously had it in beta and noticed it was hammering the kube API because of that socket timeout issue.

Looks like the latest Kubernetes release is 1.33 (https://kubernetes.io/releases/, released 2025-04-23), but the last release of this package was 1.1.2, three weeks ago.

@brendandburns
Contributor

@rossanthony yeah, we need to regenerate the client and cut a new release. Probably 2-3 weeks I would guess (maybe faster)

@cjihrig
Contributor Author

cjihrig commented Apr 30, 2025

Actually, anyone with GitHub Actions privileges on the repo (should be all maintainers?) can cut a release.

TIL. I'd be happy to help out with this. Is the release process documented anywhere?

@tonirvega

tonirvega commented May 2, 2025

Hi @brendandburns ,

We're in the process of preparing a new release based on version 1.1.2, but we're currently blocked by the socket timeout issue on AKS clusters. Unfortunately, the fix (#2367) was not included in the latest release.

This issue is becoming critical for us. Would it be possible to cut a new version early next week to help us move forward without further delays?

We’d greatly appreciate your support on this.

@tonirvega

For more details:

I coded a basic example that reproduces the error:

// index.js
import * as k8s from '@kubernetes/client-node';

// Load the kubeconfig and pre-compute the HTTPS options it implies.
export async function getConnection() {
  const kc = new k8s.KubeConfig();
  kc.loadFromDefault();

  const opts = {};
  await kc.applyToHTTPSOptions(opts);

  return { kc, opts };
}

// Watch a namespaced custom resource with an informer and log its events.
export async function observe(plural, namespace) {
  const { kc } = await getConnection();

  try {
    const k8sApi = kc.makeApiClient(k8s.CustomObjectsApi);

    const apiGroup = 'test.dev';
    const apiVersion = 'v1';
    const apiPaths = `/apis/${apiGroup}/${apiVersion}/namespaces/${namespace}/${plural}`;

    const listFn = () =>
      k8sApi.listNamespacedCustomObject({
        group: apiGroup,
        version: apiVersion,
        namespace,
        plural,
      });

    const informer = k8s.makeInformer(kc, apiPaths, listFn);

    informer.on('add', (obj) => { console.log('on add'); });
    informer.on('update', (obj) => { console.log('on update'); });
    informer.on('delete', (obj) => { console.log('on delete'); });

    informer.on('error', (err) => {
      console.log('kind', plural);
      console.log(new Date().toJSON());
      console.log('informer (on error): ERROR %O', err);
      process.exit(1);
    });

    console.log(new Date().toJSON());
    await informer.start();
  } catch (err) {
    throw new Error(`Observing ${plural}: ${err}`);
  }
}

// Observe several custom resource kinds concurrently.
for (const plural of [
  'testplural-1',
  'testplural-2',
  'testplural-3',
  'testplural-4',
]) {
  observe(plural, 'firestartr-github');
}

output:

2025-05-02T09:41:39.990Z
2025-05-02T09:41:39.992Z
2025-05-02T09:41:39.992Z
2025-05-02T09:41:39.993Z
kind testplural-1
2025-05-02T09:45:59.692Z
informer (on error): ERROR Error: read ECONNRESET
    at TLSWrap.onStreamRead (node:internal/stream_base_commons:216:20) {
  errno: -104,
  code: 'ECONNRESET',
  syscall: 'read'
}

Running on:
AKS version: v1.30.11
Node.js version: 22
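
As an aside, a common way to keep an informer alive through transient disconnects (a sketch only, assuming the same informer instance as above; it does not address the underlying socket-timeout bug) is to restart it from the 'error' handler after a short delay instead of exiting:

// Sketch: restart the informer on error rather than calling process.exit().
informer.on('error', (err) => {
  console.log('informer error, restarting in 5s: %O', err);
  setTimeout(() => informer.start(), 5000);
});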
