Seeking advice on a Spark problem

80 days ago
 wdg8106
Spark starts up fine, but submitting a job fails. Could anyone help me figure out what the problem is?

Script code:
from pyspark import SparkContext, SparkConf

host = 'discomaster'
conf = SparkConf().setMaster('spark://%s:7077' % host).setAppName('lc_test')
sc = SparkContext.getOrCreate(conf=conf)

# 10 elements spread across 4 partitions; take() pulls results back to the driver.
rdd = sc.parallelize(range(1, 11), 4)
res = rdd.take(100)
print(res)

The script is run on webserver2; discomaster is the master node.

Cluster info:
URL: spark://discomaster:7077
Alive Workers: 27
Cores in use: 27 Total, 0 Used
Memory in use: 54.0 GiB Total, 0.0 B Used
Resources in use:
Applications: 0 Running, 42 Completed
Drivers: 0 Running, 0 Completed
Status: ALIVE

Script execution output:
24/02/18 09:49:26 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
24/02/18 09:49:41 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
24/02/18 09:49:56 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
24/02/18 09:50:11 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
24/02/18 09:50:26 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

Executor stderr:
Spark Executor Command: "/usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java" "-cp" "/share/software/spark/conf/:/share/software/spark/jars/*" "-Xmx1024M" "-Dspark.driver.port=19038" "org.apache.spark.executor.CoarseGrainedExecutorBackend" "--driver-url" "spark://CoarseGrainedScheduler@webserver2:19038" "--executor-id" "372" "--hostname" "172.16.22.108" "--cores" "1" "--app-id" "app-20240218094855-0041" "--worker-url" "spark://Worker@172.16.22.108:36943"
========================================

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
24/02/18 09:49:30 INFO CoarseGrainedExecutorBackend: Started daemon with process name: 123172@cluster208
24/02/18 09:49:30 INFO SignalUtils: Registered signal handler for TERM
24/02/18 09:49:30 INFO SignalUtils: Registered signal handler for HUP
24/02/18 09:49:30 INFO SignalUtils: Registered signal handler for INT
24/02/18 09:49:30 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
24/02/18 09:49:30 INFO SecurityManager: Changing view acls to: root,lichong
24/02/18 09:49:30 INFO SecurityManager: Changing modify acls to: root,lichong
24/02/18 09:49:30 INFO SecurityManager: Changing view acls groups to:
24/02/18 09:49:30 INFO SecurityManager: Changing modify acls groups to:
24/02/18 09:49:30 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root, lichong); groups with view permissions: Set(); users with modify permissions: Set(root, lichong); groups with modify permissions: Set()
Exception in thread "main" java.lang.reflect.UndeclaredThrowableException
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1761)
at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:61)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:283)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:272)
at org.apache.spark.executor.CoarseGrainedExecutorBackend.main(CoarseGrainedExecutorBackend.scala)
Caused by: org.apache.spark.SparkException: Exception thrown in awaitResult:
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:302)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:101)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$.$anonfun$run$3(CoarseGrainedExecutorBackend.scala:303)
at scala.runtime.java8.JFunction1$mcVI$sp.apply(JFunction1$mcVI$sp.java:23)
at scala.collection.TraversableLike$WithFilter.$anonfun$foreach$1(TraversableLike.scala:877)
at scala.collection.immutable.Range.foreach(Range.scala:158)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:876)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$.$anonfun$run$1(CoarseGrainedExecutorBackend.scala:301)
at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:62)
at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:61)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746)
... 4 more
Caused by: java.io.IOException: Failed to connect to webserver2/172.16.16.175:19038
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:253)
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:195)
at org.apache.spark.rpc.netty.NettyRpcEnv.createClient(NettyRpcEnv.scala:204)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:202)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:198)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: webserver2/172.16.16.175:19038
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:330)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:702)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
6 replies
llllllllllllllii
80 days ago
Looks like a network problem: no resources were allocated. Check the master URL and the status of webserver2 and so on.
wdg8106
80 days ago
@llllllllllllllii Indeed, running the script directly on the master node works fine. But webserver2 and the master can communicate normally, and passwordless SSH is even set up between them, so what kind of network problem could it be?
AsAsSaSa
80 days ago
I haven't used Spark, but the error seems to say quite clearly that the executor cannot reach webserver2 (on port 19038). Connection refused means the connection was actively rejected, which sounds like a firewall issue?
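
One way to narrow this down is to probe the port directly from a worker node. Below is a minimal sketch using only the Python standard library; the host name and port are taken from the executor log above, and check_port is a hypothetical helper, not anything from Spark. Note that spark.driver.port is typically assigned per application, so the check is only meaningful while the failing app is still running:

import socket

def check_port(host, port, timeout=5):
    """Distinguish 'refused' (host reachable, nothing accepting on the
    port) from 'timeout' (packets possibly dropped by a firewall)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return 'open'
    except ConnectionRefusedError:
        return 'refused: host is up, but nothing is listening on that port'
    except socket.timeout:
        return 'timeout: possibly filtered by a firewall'

# Values from the stack trace above.
print(check_port('webserver2', 19038))

A refusal (as in the stack trace) usually means the packets got through and there was simply no listener on that address, which points away from a packet-dropping firewall.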
wdg8106
80 days ago
Chain INPUT (policy ACCEPT)
target prot opt source destination

Chain FORWARD (policy ACCEPT)
target prot opt source destination

Chain OUTPUT (policy ACCEPT)
target prot opt source destination
wdg8106
80 days ago
The firewall rules look quite permissive.
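
With all chain policies at ACCEPT, another thing worth checking is what address the driver process actually binds to. A small sketch, to be run on webserver2; the gethostbyname lookup is an assumption about how a default bind address gets picked, since a hostname that resolves to 127.0.0.1 in /etc/hosts is a common cause of a loopback-bound service:

import socket

# If the machine's own hostname resolves to a loopback address, a process
# that binds to "the local hostname" by default ends up unreachable from
# other machines.
name = socket.gethostname()
print(name, '->', socket.gethostbyname(name))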
wdg8106
79 days ago
Found the cause. Thanks, everyone.
I started the Spark cluster in standalone mode, and deploy-mode can be either client or cluster.
In cluster mode the client is done once it submits the job, whereas in client mode the driver program runs inside the client process, which is why nodes in the cluster end up connecting back to the client.
I was running in client mode, but because no bindAddress was specified, the driver bound to the loopback address by default, so the other machines could not reach it, hence the error.
Manually setting bindAddress fixes it.
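
A minimal sketch of that fix, assuming 172.16.16.175 is webserver2's address on the cluster network (it is the address the executor log shows them trying to reach); the relevant settings are spark.driver.bindAddress and spark.driver.host:

from pyspark import SparkContext, SparkConf

host = 'discomaster'
conf = (SparkConf()
        .setMaster('spark://%s:7077' % host)
        .setAppName('lc_test')
        # Bind the driver's RPC endpoint to an address the executors can
        # reach, instead of the loopback default on this machine
        # (assumption: 172.16.16.175 is webserver2 on the cluster network).
        .set('spark.driver.bindAddress', '172.16.16.175')
        # The address executors are told to connect back to.
        .set('spark.driver.host', '172.16.16.175'))
sc = SparkContext.getOrCreate(conf=conf)

print(sc.parallelize(range(1, 11), 4).take(100))

spark.driver.bindAddress can also be set to 0.0.0.0 to listen on all interfaces; spark.driver.host then still has to be a name or address the workers can resolve and reach.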
