2016-03-07 · 20 views · 7 votes

Spark Streaming 1.6.0 - bouncing executors

We are using Spark Streaming 1.6.0 running on AWS EMR 4.3.x, consuming data from a Kinesis stream. After migrating from Spark 1.3.1, which used to work fine, we are unable to withstand load for long. Ganglia shows the cluster's used memory growing steadily until it reaches some limit, seemingly without GC reclaiming it. After that there are several really long micro-batches (on the order of dozens of minutes instead of a few seconds), and then Spark starts killing and bouncing executors, over and over.

Basically the cluster becomes unusable. The problem reproduces under load, time after time. What could cause Spark to fail to GC and end up killing executors instead? How can we keep the cluster running for weeks (currently it cannot even run for hours)?

Any input is welcome.

We use the following settings when defining a job:

sparkConf.set("spark.shuffle.consolidateFiles", "true"); 
sparkConf.set("spark.storage.memoryFraction", "0.5"); 
sparkConf.set("spark.streaming.backpressure.enabled", "true"); 
sparkConf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer"); 

and we create the Kinesis stream with:

KinesisUtils.createStream(streamingContext, appName, 
         kinesisStreamName, kinesisEndpoint, awsRegionName, initialPositionInStream, checkpointInterval, 
         StorageLevel.MEMORY_AND_DISK_2()); 

For testing I have stripped our application down to a bare skeleton: it does a union of the streams, maps the byte stream to a string stream, then converts to objects, filters out irrelevant events, and finally persists and stores to S3.

eventStream = eventStream.persist(StorageLevel.MEMORY_AND_DISK_SER_2());

eventStream = eventStream.repartition(config.getSparkOutputPartitions());
eventStream.foreachRDD(new RddByPartitionSaverFunction<>(new OutputToS3Function()));

The Spark job is submitted with the following configuration (copied from the default Spark config, with memory sizes modified):

spark.executor.extraJavaOptions -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:MaxHeapFreeRatio=70 -XX:+CMSClassUnloadingEnabled -XX:OnOutOfMemoryError='kill -9 %p' -XX:PermSize=256M -XX:MaxPermSize=256M 
spark.driver.extraJavaOptions -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:MaxHeapFreeRatio=70 -XX:+CMSClassUnloadingEnabled -XX:OnOutOfMemoryError='kill -9 %p' -XX:PermSize=512M -XX:MaxPermSize=512M 

Adding the exceptions we are seeing. First cluster:

16/03/06 13:54:52 WARN BlockManagerMaster: Failed to remove broadcast 1327 with removeFromMaster = true - Cannot receive any reply in 120 seconds. This timeout is controlled by spark.rpc.askTimeout 
org.apache.spark.rpc.RpcTimeoutException: Cannot receive any reply in 120 seconds. This timeout is controlled by spark.rpc.askTimeout 
     at org.apache.spark.rpc.RpcTimeout.org$apache$spark$rpc$RpcTimeout$$createRpcTimeoutException(RpcTimeout.scala:48) 
     at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:63) 
     at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59) 
     at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:33) 
     at scala.util.Failure$$anonfun$recover$1.apply(Try.scala:185) 
     at scala.util.Try$.apply(Try.scala:161) 
     at scala.util.Failure.recover(Try.scala:185) 
     at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:324) 
     at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:324) 
     at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) 
     at org.spark-project.guava.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:293) 
     at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:133) 
     at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40) 
     at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248) 
     at scala.concurrent.Promise$class.complete(Promise.scala:55) 
     at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:153) 
     at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235) 
     at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235) 
     at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) 
     at scala.concurrent.Future$InternalCallbackExecutor$Batch$$anonfun$run$1.processBatch$1(Future.scala:643) 
     at scala.concurrent.Future$InternalCallbackExecutor$Batch$$anonfun$run$1.apply$mcV$sp(Future.scala:658) 
     at scala.concurrent.Future$InternalCallbackExecutor$Batch$$anonfun$run$1.apply(Future.scala:635) 
     at scala.concurrent.Future$InternalCallbackExecutor$Batch$$anonfun$run$1.apply(Future.scala:635) 
     at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72) 
     at scala.concurrent.Future$InternalCallbackExecutor$Batch.run(Future.scala:634) 
     at scala.concurrent.Future$InternalCallbackExecutor$.scala$concurrent$Future$InternalCallbackExecutor$$unbatchedExecute(Future.scala:694) 
     at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:685) 
     at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40) 
     at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248) 
     at scala.concurrent.Promise$class.tryFailure(Promise.scala:112) 
     at scala.concurrent.impl.Promise$DefaultPromise.tryFailure(Promise.scala:153) 
     at org.apache.spark.rpc.netty.NettyRpcEnv$$anon$1.run(NettyRpcEnv.scala:241) 
     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
     at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178) 
     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292) 
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
     at java.lang.Thread.run(Thread.java:745) 
Caused by: java.util.concurrent.TimeoutException: Cannot receive any reply in 120 seconds 
     at org.apache.spark.rpc.netty.NettyRpcEnv$$anon$1.run(NettyRpcEnv.scala:242) 
     ... 7 more 
16/03/06 13:54:52 ERROR ContextCleaner: Error cleaning broadcast 1327 
org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [120 seconds]. This timeout is controlled by spark.rpc.askTimeout 
     at org.apache.spark.rpc.RpcTimeout.org$apache$spark$rpc$RpcTimeout$$createRpcTimeoutException(RpcTimeout.scala:48) 
     at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:63) 
     at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59) 
     at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:33) 
     at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:76) 
     at org.apache.spark.storage.BlockManagerMaster.removeBroadcast(BlockManagerMaster.scala:136) 
     at org.apache.spark.broadcast.TorrentBroadcast$.unpersist(TorrentBroadcast.scala:228) 
     at org.apache.spark.broadcast.TorrentBroadcastFactory.unbroadcast(TorrentBroadcastFactory.scala:45) 
     at org.apache.spark.broadcast.BroadcastManager.unbroadcast(BroadcastManager.scala:67) 
     at org.apache.spark.ContextCleaner.doCleanupBroadcast(ContextCleaner.scala:233) 
     at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1$$anonfun$apply$mcV$sp$2.apply(ContextCleaner.scala:189) 
     at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1$$anonfun$apply$mcV$sp$2.apply(ContextCleaner.scala:180) 
     at scala.Option.foreach(Option.scala:236) 
     at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1.apply$mcV$sp(ContextCleaner.scala:180) 
     at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1180) 
     at org.apache.spark.ContextCleaner.org$apache$spark$ContextCleaner$$keepCleaning(ContextCleaner.scala:173) 
     at org.apache.spark.ContextCleaner$$anon$3.run(ContextCleaner.scala:68) 
Caused by: java.util.concurrent.TimeoutException: Futures timed out after [120 seconds] 
     at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219) 
     at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223) 
     at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107) 
     at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53) 
     at scala.concurrent.Await$.result(package.scala:107) 
     at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75) 
     ... 12 more 

16/03/06 13:55:04 ERROR YarnClusterScheduler: Lost executor 6 on ip-***-194.ec2.internal: Container killed by YARN for exceeding memory limits. 11.3 GB of 11.3 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead. 
16/03/06 13:55:10 ERROR YarnClusterScheduler: Lost executor 1 on ip-***-193.ec2.internal: Container killed by YARN for exceeding memory limits. 11.3 GB of 11.3 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead. 
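The YARN messages above are about total container memory (heap plus off-heap), and they themselves suggest raising the overhead. Purely as an illustration of what that knob looks like in spark-defaults form (the 2048 MB value is an arbitrary example, not something we have validated):

```
spark.yarn.executor.memoryOverhead 2048
```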

Second cluster attempt:

16/03/07 14:24:38 ERROR server.TransportChannelHandler: Connection to ip-***-22.ec2.internal/N.N.N.22:40791 has been quiet for 120000 ms while there are outstanding requests. Assuming connection is dead; please adjust spark.network.timeout if this is wrong. 
16/03/07 14:24:38 ERROR client.TransportResponseHandler: Still have 12 requests outstanding when connection from ip-***-22.ec2.internal/N.N.N.22:40791 is closed 
16/03/07 14:24:38 ERROR netty.NettyBlockTransferService: Error while uploading block input-47-1457357970366 
java.io.IOException: Connection from ip-***-22.ec2.internal/N.N.N.22:40791 closed 
     at org.apache.spark.network.client.TransportResponseHandler.channelUnregistered(TransportResponseHandler.java:124) 
     at org.apache.spark.network.server.TransportChannelHandler.channelUnregistered(TransportChannelHandler.java:94) 
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelUnregistered(AbstractChannelHandlerContext.java:158) 
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelUnregistered(AbstractChannelHandlerContext.java:144) 
     at io.netty.channel.ChannelInboundHandlerAdapter.channelUnregistered(ChannelInboundHandlerAdapter.java:53) 
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelUnregistered(AbstractChannelHandlerContext.java:158) 
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelUnregistered(AbstractChannelHandlerContext.java:144) 
     at io.netty.channel.ChannelInboundHandlerAdapter.channelUnregistered(ChannelInboundHandlerAdapter.java:53) 
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelUnregistered(AbstractChannelHandlerContext.java:158) 
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelUnregistered(AbstractChannelHandlerContext.java:144) 
     at io.netty.channel.ChannelInboundHandlerAdapter.channelUnregistered(ChannelInboundHandlerAdapter.java:53) 
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelUnregistered(AbstractChannelHandlerContext.java:158) 
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelUnregistered(AbstractChannelHandlerContext.java:144) 
     at io.netty.channel.DefaultChannelPipeline.fireChannelUnregistered(DefaultChannelPipeline.java:739) 
     at io.netty.channel.AbstractChannel$AbstractUnsafe$8.run(AbstractChannel.java:659) 
     at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357) 
     at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357) 
     at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111) 
     at java.lang.Thread.run(Thread.java:745) 
16/03/07 14:24:38 ERROR netty.NettyBlockTransferService: Error while uploading block input-15-1457357969730 
java.io.IOException: Connection from ip-***-22.ec2.internal/N.N.N.22:40791 closed 
     at org.apache.spark.network.client.TransportResponseHandler.channelUnregistered(TransportResponseHandler.java:124) 
     at org.apache.spark.network.server.TransportChannelHandler.channelUnregistered(TransportChannelHandler.java:94) 
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelUnregistered(AbstractChannelHandlerContext.java:158) 
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelUnregistered(AbstractChannelHandlerContext.java:144) 
     at io.netty.channel.ChannelInboundHandlerAdapter.channelUnregistered(ChannelInboundHandlerAdapter.java:53) 
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelUnregistered(AbstractChannelHandlerContext.java:158) 
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelUnregistered(AbstractChannelHandlerContext.java:144) 
     at io.netty.channel.ChannelInboundHandlerAdapter.channelUnregistered(ChannelInboundHandlerAdapter.java:53) 
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelUnregistered(AbstractChannelHandlerContext.java:158) 
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelUnregistered(AbstractChannelHandlerContext.java:144) 
     at io.netty.channel.ChannelInboundHandlerAdapter.channelUnregistered(ChannelInboundHandlerAdapter.java:53) 
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelUnregistered(AbstractChannelHandlerContext.java:158) 
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelUnregistered(AbstractChannelHandlerContext.java:144) 
     at io.netty.channel.DefaultChannelPipeline.fireChannelUnregistered(DefaultChannelPipeline.java:739) 
     at io.netty.channel.AbstractChannel$AbstractUnsafe$8.run(AbstractChannel.java:659) 
     at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357) 
     at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357) 
     at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111) 
     at java.lang.Thread.run(Thread.java:745) 
java.io.IOException: Connection from ip-***-22.ec2.internal/N.N.N.22:40791 closed 
     at org.apache.spark.network.client.TransportResponseHandler.channelUnregistered(TransportResponseHandler.java:124) 
     at org.apache.spark.network.server.TransportChannelHandler.channelUnregistered(TransportChannelHandler.java:94) 
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelUnregistered(AbstractChannelHandlerContext.java:158) 
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelUnregistered(AbstractChannelHandlerContext.java:144) 
     at io.netty.channel.ChannelInboundHandlerAdapter.channelUnregistered(ChannelInboundHandlerAdapter.java:53) 
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelUnregistered(AbstractChannelHandlerContext.java:158) 
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelUnregistered(AbstractChannelHandlerContext.java:144) 
     at io.netty.channel.ChannelInboundHandlerAdapter.channelUnregistered(ChannelInboundHandlerAdapter.java:53) 
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelUnregistered(AbstractChannelHandlerContext.java:158) 
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelUnregistered(AbstractChannelHandlerContext.java:144) 
     at io.netty.channel.ChannelInboundHandlerAdapter.channelUnregistered(ChannelInboundHandlerAdapter.java:53) 
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelUnregistered(AbstractChannelHandlerContext.java:158) 
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelUnregistered(AbstractChannelHandlerContext.java:144) 
     at io.netty.channel.DefaultChannelPipeline.fireChannelUnregistered(DefaultChannelPipeline.java:739) 
     at io.netty.channel.AbstractChannel$AbstractUnsafe$8.run(AbstractChannel.java:659) 
     at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357) 
     at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357) 
     at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111) 
     at java.lang.Thread.run(Thread.java:745) 
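The TransportChannelHandler error above explicitly says to adjust spark.network.timeout if the connection is not actually dead. For illustration only (the 600s value is an arbitrary example we have not validated; askTimeout inherits from network.timeout unless set separately), the relevant properties in spark-defaults form would be:

```
spark.network.timeout 600s
spark.rpc.askTimeout 600s
```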

Thanks in advance...

+0

Did you ever find a solution to the problem? –

+0

Hi. No, we still have not found a solution. The problem only reproduces with a large number of Kinesis shards (120); only then do the executors start bouncing. Ideas? Thanks – visitor

+0

The executor bouncing stopped in Spark 2.0.0 (EMR 5.0.0). We found a new problem that prevents long runs of the same deployment: http://stackoverflow.com/questions/39289345/spark-streaming-2-0-0-freezes-after-several-days-under-load – visitor

Answers

-2

I am getting something in the same vein. When a micro-batch took more than 120 seconds to complete, this was thrown:

16/03/14 22:57:30 INFO SparkStreaming$: Batch size: 2500, total read: 4287800 
16/03/14 22:57:35 INFO SparkStreaming$: Batch size: 2500, total read: 4290300 
16/03/14 22:57:42 INFO SparkStreaming$: Batch size: 2500, total read: 4292800 
16/03/14 22:57:45 INFO SparkStreaming$: Batch size: 2500, total read: 4295300 
16/03/14 22:59:45 ERROR ContextCleaner: Error cleaning broadcast 11251 
org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [120 seconds]. This timeout is controlled by spark.rpc.askTimeout 
     at org.apache.spark.rpc.RpcTimeout.org$apache$spark$rpc$RpcTimeout$$createRpcTimeoutException(RpcTimeout.scala:48) 
     at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:63) 
     at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59) 
     at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:33) 
     at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:76) 
     at org.apache.spark.storage.BlockManagerMaster.removeBroadcast(BlockManagerMaster.scala:136) 
     at org.apache.spark.broadcast.TorrentBroadcast$.unpersist(TorrentBroadcast.scala:228) 
     at org.apache.spark.broadcast.TorrentBroadcastFactory.unbroadcast(TorrentBroadcastFactory.scala:45) 
     at org.apache.spark.broadcast.BroadcastManager.unbroadcast(BroadcastManager.scala:67) 
     at org.apache.spark.ContextCleaner.doCleanupBroadcast(ContextCleaner.scala:233) 
     at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1$$anonfun$apply$mcV$sp$2.apply(ContextCleaner.scala:189) 
     at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1$$anonfun$apply$mcV$sp$2.apply(ContextCleaner.scala:180) 
     at scala.Option.foreach(Option.scala:236) 
     at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1.apply$mcV$sp(ContextCleaner.scala:180) 
     at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1180) 
     at org.apache.spark.ContextCleaner.org$apache$spark$ContextCleaner$$keepCleaning(ContextCleaner.scala:173) 
     at org.apache.spark.ContextCleaner$$anon$3.run(ContextCleaner.scala:68) 
Caused by: java.util.concurrent.TimeoutException: Futures timed out after [120 seconds] 
     at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219) 
     at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223) 
     at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107) 
     at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53) 
     at scala.concurrent.Await$.result(package.scala:107) 
     at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75) 
     ... 12 more 

I am running in local mode, consuming from Kinesis. I am not using any broadcast variables either.
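For what it's worth, the 120-second timeout in the trace is controlled by spark.rpc.askTimeout, which falls back to spark.network.timeout (120s by default). A minimal sketch of raising it in the same style as the question's configuration (the 300s value is illustrative, not a recommendation I have validated):

```
sparkConf.set("spark.rpc.askTimeout", "300s");
```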
