Bug #4762 (closed)

frequent "shutting down Core ... Core terminated." in /var/log/foreman/production.log

Added by Jan Hutař about 10 years ago. Updated almost 6 years ago.

Status: Closed
Priority: Normal
Assignee:
Category: Orchestration
Target version:
Difficulty:
Triaged: Yes
Fixed in Releases:
Found in Releases:

Description

I have been tailing the logs for about 2 hours and have seen this appear 3 times:

==> /var/log/foreman/cron.log <==

==> /var/log/foreman/production.log <==
Connecting to database specified by database.yml
Clean start.
shutting down Core ...
... Core terminated.
Processing by HostsController#externalNodes as YML
  Parameters: {"name"=>"<fqdn>"}
  Rendered text template (0.0ms)
Completed 200 OK in 66ms (Views: 0.8ms | ActiveRecord: 9.1ms)
Processing by Api::V2::HostsController#facts as JSON
  Parameters: {"facts"=>"[FILTERED]", "certname"=>"<fqdn>", "name"=>"<fqdn>", "apiv"=>"v2", "host"=>{"facts"=>"[FILTERED]", "certname"=>"<fqdn>", "name"=>"<fqdn>"}}
Import facts for '<fqdn>' completed. Added: 0, Updated: 3, Deleted 0 facts
Completed 201 Created in 336ms (Views: 6.2ms | ActiveRecord: 0.0ms)
Processing by HostsController#externalNodes as YML
  Parameters: {"name"=>"<fqdn>"}
  Rendered text template (0.0ms)
Completed 200 OK in 39ms (Views: 0.8ms | ActiveRecord: 5.5ms)
Processing by Api::V2::ReportsController#create as JSON
  Parameters: {"report"=>"[FILTERED]", "apiv"=>"v2"}
processing report for <fqdn>
Imported report for <fqdn> in 0.02 seconds
Completed 201 Created in 32ms (Views: 2.5ms | ActiveRecord: 0.0ms)

After discussion with inecas, I was told to file this, as it is strange:

<inecas_> jhutar: that shows there when stopping/restarting the service
[...]
<inecas> jhutar: that is kind of strange: could you file a bug here http://projects.theforeman.org/projects/katello/issues/new, it might or might not be an issue, but I would like to look into more details

This happened on:

# rpm -qa | grep -e katello -e foreman | sort
foreman-1.5.0.8-1.el6sat.noarch
foreman-compute-1.5.0.8-1.el6sat.noarch
foreman-libvirt-1.5.0.8-1.el6sat.noarch
foreman-ovirt-1.5.0.8-1.el6sat.noarch
foreman-postgresql-1.5.0.8-1.el6sat.noarch
foreman-proxy-1.5.2-1.el6sat.noarch
foreman-selinux-1.5.0-0.develop.el6sat.noarch
foreman-vmware-1.5.0.8-1.el6sat.noarch
<fqdn>-foreman-client-1.0-1.noarch
<fqdn>-foreman-proxy-1.0-1.noarch
katello-1.5.0-17.el6sat.noarch
katello-apache-1.0-1.noarch
katello-ca-1.0-1.noarch
katello-certs-tools-1.5.4-1.el6sat.noarch
katello-installer-0.0.28-1.el6sat.noarch
pulp-katello-plugins-0.2-1.el6sat.noarch
ruby193-rubygem-foreman-tasks-0.4.0-4.el6sat.noarch
ruby193-rubygem-katello-1.5.0-21.el6sat.noarch
rubygem-foreman_api-0.1.11-3.el6sat.noarch
rubygem-hammer_cli_foreman-0.0.18-3.el6sat.noarch
rubygem-hammer_cli_foreman_tasks-0.0.1-4.el6sat.noarch
rubygem-hammer_cli_katello-0.0.3-3.el6sat.noarch
rubygem-katello_api-0.0.8-2.el6sat.noarch
#1

Updated by Jan Hutař about 10 years ago

Some HW info as requested:

# tail -n 26 /proc/cpuinfo 
processor    : 23
vendor_id    : AuthenticAMD
cpu family    : 16
model        : 9
model name    : AMD Opteron(tm) Processor 6174
stepping    : 1
cpu MHz        : 2200.133
cache size    : 512 KB
physical id    : 1
siblings    : 12
core id        : 5
cpu cores    : 12
apicid        : 43
initial apicid    : 27
fpu        : yes
fpu_exception    : yes
cpuid level    : 5
wp        : yes
flags        : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm 3dnowext 3dnow constant_tsc rep_good nonstop_tsc extd_apicid amd_dcm pni monitor cx16 popcnt lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt nodeid_msr npt lbrv svm_lock nrip_save pausefilter
bogomips    : 4400.03
TLB size    : 1024 4K pages
clflush size    : 64
cache_alignment    : 64
address sizes    : 48 bits physical, 48 bits virtual
power management: ts ttp tm stc 100mhzsteps hwpstate
# free
             total       used       free     shared    buffers     cached
Mem:      16297264   15651036     646228          0    2648496    7456416
-/+ buffers/cache:    5546124   10751140
Swap:      8216568      44964    8171604
# df -h
Filesystem                           Size  Used Avail Use% Mounted on
/dev/mapper/something-lv_root        207G   92G  105G  47% /
tmpfs                                7.8G     0  7.8G   0% /dev/shm
/dev/sda1                            485M   40M  420M   9% /boot
/dev/mapper/something-lv_home        5.0G  161M  4.6G   4% /home
#2

Updated by Mike McCune about 10 years ago

  • Triaged set to Yes
#3

Updated by Mike McCune about 10 years ago

  • Triaged deleted (Yes)
#4

Updated by Mike McCune about 10 years ago

  • Category set to Orchestration
  • Assignee set to Ivan Necas
  • Triaged set to Yes

FYI, I see this a lot as well on various VMs running Sat6:

shutting down Core ...
... Core terminated.
shutting down Core ...
... Core terminated.
shutting down Core ...
..

#5

Updated by Ivan Necas about 10 years ago

  • Status changed from New to Assigned

The "problem" is there is a cron job for runing rake task scheduled for running every 30 minutes and for now, the rake task always run their own dynflow executor (done primarily for db:migrate and db:seed tasks, as the proper executor usually doesn't run while running these particular tasks) I will update the foreman-tasks to run their own executor only for specific rake tasks.

#6

Updated by Ivan Necas about 10 years ago

  • Status changed from Assigned to Closed
  • % Done changed from 0 to 100

Applied in changeset katello|commit:901b303057a13fa177b588f47e6b9010d82bc61e.

#7

Updated by Eric Helms over 9 years ago

  • Release set to 13