Huge logs

chaslinux
Novice
Posts: 109
Joined: 08 Aug 2009, 01:57
Location: Kitchener, Ontario

Huge logs

Post by chaslinux »

No, I didn't saw any logs (*bad attempt at a joke*). I did have a look at my server logs, though, and in the few months the auldsbel server has been up they have grown rather large:

char.log - 36MB
eathena-monitor.log - 112MB
login.log - 378MB

Is this normal? I'm thinking a couple of weeks' worth of logs is handy to keep, and I can always back up old logs. Does eAthena have anything built in to manage the log files, or should this be done with a tool like logrotate?

Cheers!
Kage
Manasource
Posts: 929
Joined: 02 May 2009, 18:12

Re: Huge logs

Post by Kage »

It depends on what the logs say. I noticed they get large when running the monitor; I no longer use the monitor for this very reason. Just start the server binaries separately.
<Kage_Jittai> ... are you saying I am elite :D
<thorbjorn> Yes. :P
Freeyorp101
Archivist Prime
Posts: 765
Joined: 04 Nov 2008, 09:17
Location: New Zealand

Re: Huge logs

Post by Freeyorp101 »

It's not advisable to use the monitor since the changes to how the char-server writes its data. I'd guess the log file sizes you listed are normal for a small server. tmwAthena has no built-in mechanism for archiving or deleting logs.


---Freeyorp
(09:58:17) < tux9th> Freeyorp: your sig on the forums is kind of outdated
Frost
TMW Adviser
Posts: 851
Joined: 09 Sep 2010, 06:20
Location: California, USA

Re: Huge logs

Post by Frost »

In the UNIX/Linux world, I generally suggest rotating both logs and backups, and logrotate is a great way to do that.

400MB is annoying to parse, but is not a storage problem per se. When something starts looping and spitting errors, you could be dealing with a 40GB log file and that's just not fun. Rotating and compressing logs (gzip, if CPU load is a consideration) might save you more than you think.
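For instance, a minimal logrotate stanza along these lines should do the job (untested sketch; the path is just a placeholder for wherever your eAthena logs actually live). copytruncate empties the files in place, so the server can keep writing to its open handles without a restart:

Code: Select all

# /etc/logrotate.d/eathena -- hypothetical example, adjust the path
# Rotate weekly, keep four old copies, gzip them, and truncate in place
# (copytruncate) because the server keeps its log files open.
/path/to/eathena/log/*.log {
    weekly
    rotate 4
    compress
    copytruncate
    missingok
    notifempty
}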
You earn respect by how you live, not by what you demand.
-unknown
Kage
Manasource
Posts: 929
Joined: 02 May 2009, 18:12

Re: Huge logs

Post by Kage »

Frost wrote: In the UNIX/Linux world, I generally suggest rotating both logs and backups, and logrotate is a great way to do that.

400MB is annoying to parse, but is not a storage problem per se. When something starts looping and spitting errors, you could be dealing with a 40GB log file and that's just not fun. Rotating and compressing logs (gzip, if CPU load is a consideration) might save you more than you think.
A 400MB log file is indeed a big deal, especially when you get one from only running the server for a few hours.
<Kage_Jittai> ... are you saying I am elite :D
<thorbjorn> Yes. :P
o11c
Grand Knight
Posts: 2262
Joined: 20 Feb 2011, 21:09
Location: ^ ^

Re: Huge logs

Post by o11c »

I am aware of this problem, although during this stage of my rewrite I am more concerned about making logging consistent (i.e. writing the same thing to the console and to the log, sign errors) than reducing it. But it is an objective, ultimately.

Remember that, as long as a process keeps running with an open file descriptor, renaming the file will not actually rotate it.

Note also that the map server's logs back up and gzip themselves.

...
Hm, maybe my API should be something like:

FILE * get_log (const char *basename);

to hide the automatic splitting from callers and let the implementation do backups?

Now the only question is how often to do backups.
Counting by time would be problematic because different servers have different usage scales.
Counting bytes or lines would not be the easiest thing to do, but counting the number of calls to foo_log(format, ...) would be pretty close to line count.
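Something like this is what I have in mind (rough sketch only; log_printf, MAX_WRITES, and the file name are placeholders, not actual tmwAthena code):

Code: Select all

/* Rough sketch: get_log() hands out the FILE*, all logging funnels
 * through log_printf(), and the write count stands in for line count. */
#include <stdarg.h>
#include <stdio.h>
#include <time.h>

#define MAX_WRITES 100000UL   /* back the file up after this many log calls */

/* One log stream only, for brevity; the real thing would keep a table
 * keyed by basename. */
static FILE *log_fp;
static char log_base[256];
static unsigned long log_writes;

FILE *get_log(const char *basename)
{
    if (!log_fp)
    {
        snprintf(log_base, sizeof log_base, "%s", basename);
        log_fp = fopen(log_base, "a");
    }
    return log_fp;
}

/* Every log call goes through here, so counting calls approximates lines. */
void log_printf(const char *fmt, ...)
{
    va_list ap;
    FILE *fp = get_log("example.log");   /* hypothetical file name */

    if (!fp)
        return;
    va_start(ap, fmt);
    vfprintf(fp, fmt, ap);
    va_end(ap);

    if (++log_writes >= MAX_WRITES)
    {
        char backup[320];
        /* close, move aside with a timestamp, and reopen a fresh file */
        fclose(log_fp);
        snprintf(backup, sizeof backup, "%s.%ld", log_base, (long)time(NULL));
        rename(log_base, backup);   /* a gzip pass could go here */
        log_fp = fopen(log_base, "a");
        log_writes = 0;
    }
}

int main(void)
{
    log_printf("map server started at %ld\n", (long)time(NULL));
    return 0;
}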
Former programmer for the TMWA server.
Frost
TMW Adviser
Posts: 851
Joined: 09 Sep 2010, 06:20
Location: California, USA

Re: Huge logs

Post by Frost »

Kage wrote:
Frost wrote: 400MB is annoying to parse, but is not a storage problem per se....you could be dealing with a 40GB log file and that's just not fun.
A 400MB log file is indeed a big deal, especially when you get one from only running the server for a few hours.
Even ten years ago, 400MB was not a "big deal" to store on disk. The OP mentioned 400MB over months, not hours.
If you wish to discuss log solutions for high-volume servers, let's chat privately or in a different thread. I've set up enterprise loghost and SEM solutions and would be happy to talk further. Syslog geeks are a (thankfully) rare breed. :)
o11c wrote: Remember that, as long as a process keeps running with an open file descriptor, renaming the file will not actually rotate it.
Syslog has usually addressed that problem. That's just one benefit of logging to syslog rather than directly to the filesystem.
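To illustrate (this is not something tmwAthena does today, and the ident and messages here are made up), logging through syslog from C is about this much code; syslogd or rsyslog then owns rotation, forwarding, and so on:

Code: Select all

/* Illustration only -- the "char-server" ident and messages are made up. */
#include <syslog.h>

int main(void)
{
    openlog("char-server", LOG_PID, LOG_DAEMON);
    syslog(LOG_INFO, "character %s saved", "some_player");
    syslog(LOG_ERR, "could not write save file");
    closelog();
    return 0;
}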
You earn respect by how you live, not by what you demand.
-unknown
Kage
Manasource
Posts: 929
Joined: 02 May 2009, 18:12

Re: Huge logs

Post by Kage »

Frost wrote: Even ten years ago, 400MB was not a "big deal" to store on disk. The OP mentioned 400MB over months, not hours.
If you wish to discuss log solutions for high-volume servers, let's chat privately or in a different thread. I've set up enterprise loghost and SEM solutions and would be happy to talk further. Syslog geeks are a (thankfully) rare breed. :)
It is a problem when my VPS only has a 15GB HDD.

And on my server, it would actually build up to over 400MB in a few minutes... there is an issue with logging. Actually, I had to stop running eA on my slice because it ran out of hard disk space. I just recently started it up again to begin testing some new content.
<Kage_Jittai> ... are you saying I am elite :D
<thorbjorn> Yes. :P
bcs86
Warrior
Posts: 259
Joined: 27 Feb 2009, 17:14

Re: Huge logs

Post by bcs86 »

If you aren't going to read the log files:

Code: Select all

# -f replaces the existing (possibly huge) log file with the symlink
ln -sf /dev/null /path/to/logfile.log
for each one.

If you might read the logs, but not old entries:

Code: Select all

cat /dev/null > /path/to/logfile.log
in a script that runs every few days (either as a cron job or a loop around sleep($MANYSECONDS)...). It just empties the file when executed.
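For example, a crontab entry like this (the path is just a placeholder) empties the file at 03:00 roughly every third day:

Code: Select all

0 3 */3 * * cat /dev/null > /path/to/logfile.log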

blah.
o11c
Grand Knight
Posts: 2262
Joined: 20 Feb 2011, 21:09
Location: ^ ^

Re: Huge logs

Post by o11c »

If you aren't going to read the logs, the server behaves fine if it can't create them (e.g. if the directory does not exist, or if you chmod -w the individual files).
Former programmer for the TMWA server.