
Should targetFile be removed when it already exists? #62

Open
zoluzo opened this issue Jun 17, 2020 · 2 comments


zoluzo commented Jun 17, 2020

When the current file should roll, the operation is
fs.move(sourceFilePath, targetFilePath, { overwrite: true });
which deletes the target file if it already exists.
In a multi-process app with disableClustering: true, every process performs the same operation. The first process to write a log renames log.log to log.log.2020-01-01-01 and creates a fresh log.log. The next process then renames that fresh log.log to log.log.2020-01-01-01 as well, overwriting the archive and losing logs (a minimal simulation follows the proposed guard below).

So, when the target file exists, should the remove operation be skipped? Something like:

try {
  if (fs.existsSync(targetFilePath)) {
    return;
  }
  await fs.move(sourceFilePath, targetFilePath, { overwrite: true });
} catch (e) {
  // errors are intentionally swallowed here
}
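To make the race concrete, here is a minimal, standalone simulation of the overwriting sequence described above. It assumes fs-extra (which provides the move() used here); the file names and contents are illustrative, not taken from log4js itself:

const fs = require('fs-extra');

async function simulateRace() {
  const current = 'log.log';
  const rolled = 'log.log.2020-01-01-01';

  // Logs from the old period have accumulated in log.log.
  await fs.writeFile(current, 'old-period logs\n');

  // Process A rolls: archives log.log, then starts a fresh one.
  await fs.move(current, rolled, { overwrite: true });
  await fs.writeFile(current, 'new-period logs\n');

  // Process B, unaware that A already rolled, repeats the same move.
  // The fresh log.log overwrites the archive: the old-period logs are gone.
  await fs.move(current, rolled, { overwrite: true });

  console.log(await fs.readFile(rolled, 'utf8')); // prints "new-period logs"
}

simulateRace();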

@lamweili
Contributor

This is tricky. While your proposed solution avoids the log loss, it creates another issue: the existing log keeps growing (without rolling) until the disk runs out of space. That is another catastrophic event, since one configures rolling precisely to cap log files to a fixed, cyclic amount of disk usage.

@lamweili
Contributor

lamweili commented May 12, 2022

Originally posted by @nomiddlename in Slack:
Multiple processes writing to the same log file, and also attempting to rotate that log file, is going to cause weird behaviour. They have no way of communicating with each other, so each thinks it is the only one rotating the files. It would be better to maybe split the log files by process id or something?

Perhaps, if you have to set disableClustering: true, the filename can include process.pid? The three files below sketch that approach.

index.js

const http = require('http');
const { fork } = require('child_process');
const path = require('path');

const taskPath = path.resolve(__dirname, './task.js');

// Fork a fixed number of workers and track their pids.
const pidCount = 4;
const pids = {};
for (let i = 0; i < pidCount; i++) {
  const pro = fork(taskPath);
  pids[pro.pid] = pro.pid;

  // Clear the entry when a child errors or exits.
  pro.on('error', () => {
    pids[pro.pid] = null;
  });
  pro.on('exit', () => {
    pids[pro.pid] = null;
  });
}

// Serve the live pid map over HTTP for inspection.
const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'application/json' });
  const resp = JSON.stringify(pids);
  res.end(resp);
});

process.stdout.write('[Server start on port 8000.]\n');
server.listen(8000);

task.js

const logger = require('./logger');

const pid = process.pid;
// Log a heartbeat from this pid every 500 ms.
setInterval(() => {
  logger.info(`I am ${pid}`);
}, 500);

process.stdout.write(`Subprocess ${pid} start\n`);

logger.js

const log4js = require('log4js');
log4js.configure({
  appenders: {
    out: { type: 'stdout' },
    app: {
      type: 'dateFile',
      // Each process logs into its own directory, keyed by pid, so no two
      // processes ever roll the same file:
      // logs/1234/app.log or logs/1234/app.2022-05-12-23:42:26.log
      filename: `logs/${process.pid}/app.log`,
      pattern: 'yyyy-MM-dd-hh:mm:ss',
      alwaysIncludePattern: false,
      keepFileExt: true,
      numBackups: 5
    }
  },
  categories: {
    default: { appenders: [ 'out', 'app' ], level: 'trace' }
  },
  disableClustering: true
});

const infoLogger = log4js.getLogger();
function info(...params) {
  infoLogger.info(...params);
}

module.exports = {
  info
};
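With this config, each process rolls only files under its own logs/<pid>/ directory, so the rename race cannot occur across processes. A run of the example might leave a layout like this (pids and timestamps illustrative):

logs/
  83214/
    app.log
    app.2022-05-12-23:42:27.log
    app.2022-05-12-23:42:26.log
  83215/
    app.log
    app.2022-05-12-23:42:27.log
    ...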
