
YARN containers being killed – memory configuration ignored (?)


I’m trying to run a Spark job on YARN (on HDP 2.3), and I’m getting messages like this in the YARN logs:

2015-11-13 17:23:22,426 WARN monitor.ContainersMonitorImpl (ContainersMonitorImpl.java:run(508)) - Container [pid=13043,containerID=container_1447427019168_0006_01_000042] is running beyond physical memory limits. Current usage: 2.1 GB of 2 GB physical memory used; 5.1 GB of 4.2 GB virtual memory used. Killing container.
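In case it's relevant, the job is being submitted with spark-submit in yarn-cluster mode, roughly along these lines (the class and jar names and the memory value below are illustrative placeholders, not my exact command):

spark-submit \
  --master yarn-cluster \
  --class com.example.MyJob \
  --executor-memory 2g \
  my-job.jar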

The first time this happened was reasonable: I was using the configuration that Ambari created for me, and it was setting both yarn.nodemanager.resource.memory-mb and yarn.scheduler.maximum-allocation-mb to 4096. Hence the container gets killed when it uses more than 4GB (plus a bit of leeway).
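For reference, those two properties end up in yarn-site.xml on the nodes like this (values as Ambari set them):

  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>4096</value>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>4096</value>
  </property>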

(Most of this memory use comes from the Java container process itself and an external program spawned from my code.)

The nodes have 8 GB each, so I should be able to raise this limit. However, if I set these two settings to 6144 (via Ambari), it makes no difference: the message in the logs still reports 4.2 GB as the limit. The same thing happens if I reduce the settings to 3072. It's as if the settings are simply being ignored, or I'm changing completely the wrong thing.

It occurred to me that changes made in Ambari weren’t being reflected in the config files on the nodes, but I’ve checked and it’s not that.
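For what it's worth, this is how I checked on each node (assuming /etc/hadoop/conf/yarn-site.xml is the file the NodeManager actually reads):

grep -A1 -E 'yarn.nodemanager.resource.memory-mb|yarn.scheduler.maximum-allocation-mb' /etc/hadoop/conf/yarn-site.xml

The new values show up there, so the files themselves are being updated.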

It also can’t be that the changes aren’t being picked up by YARN: as well as the component restarts that Ambari asks for, I’ve been rebooting the nodes themselves.

I’m stumped.

