
Notes on Running a Flink Standalone Session Cluster

Tags: Flink

While running batch jobs on Flink I ran into quite a few problems. I'm sharing them here for anyone hitting the same issues, and as a record for myself.

Problem 1

When setting up a cluster, install Flink at the same directory path on every machine; the start/stop scripts SSH into each node and assume that path, so a mismatch can leave the cluster unable to start, or unable to shut down cleanly.
The jars under lib/ must also be kept identical on every node, otherwise serialization problems (class version mismatches) are easy to trigger; a quick way to verify this is sketched below.
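
Jar drift is easy to introduce silently, so it is worth checking lib/ before chasing stranger symptoms. A minimal sketch that compares jar checksums across nodes, assuming passwordless SSH and the install path from the log below (the hostnames are placeholders, not part of the original setup):

#!/usr/bin/env bash
# Print one checksum line per jar on each node; diff the outputs to spot drift.
FLINK_LIB=/bigdata/flink-1.6.0/lib
for host in storm3.demo.com storm4.demo.com storm5.demo.com; do
    ssh "$host" "md5sum $FLINK_LIB/*.jar" > "libs-$host.txt"
done
diff libs-storm3.demo.com.txt libs-storm4.demo.com.txt

Incidentally, the log in Problem 2 below shows both slf4j-log4j12-1.7.25 and slf4j-log4j12-1.7.7 sitting in lib/ at the same time; duplicate or stale jars on a single node cause trouble just as surely as mismatches across nodes.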

Problem 2

Make sure Kerberos authentication stays valid on every host, otherwise you can easily hit the following:

Setting HADOOP_CONF_DIR=/etc/hadoop/conf because no HADOOP_CONF_DIR was set.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/bigdata/flink-1.6.0/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/bigdata/flink-1.6.0/lib/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Starting execution of program
2018-08-14 22:26:48,840 WARN  org.apache.hadoop.conf.Configuration                          - hdfs-site.xml:an attempt to override final parameter: dfs.datanode.data.dir;  Ignoring.
2018-08-14 22:26:48,840 WARN  org.apache.hadoop.conf.Configuration                          - hdfs-site.xml:an attempt to override final parameter: dfs.datanode.failed.volumes.tolerated;  Ignoring.
2018-08-14 22:26:48,840 WARN  org.apache.hadoop.conf.Configuration                          - hdfs-site.xml:an attempt to override final parameter: dfs.namenode.http-address;  Ignoring.
2018-08-14 22:26:48,840 WARN  org.apache.hadoop.conf.Configuration                          - hdfs-site.xml:an attempt to override final parameter: dfs.namenode.name.dir;  Ignoring.
2018-08-14 22:26:48,840 WARN  org.apache.hadoop.conf.Configuration                          - hdfs-site.xml:an attempt to override final parameter: dfs.webhdfs.enabled;  Ignoring.
2018-08-14 22:26:48,844 WARN  org.apache.hadoop.conf.Configuration                          - core-site.xml:an attempt to override final parameter: fs.defaultFS;  Ignoring.
2018-08-14 22:26:48,930 INFO  org.apache.hadoop.security.UserGroupInformation               - Login successful for user hdfs-flink@dounine.com using keytab file /etc/security/keytabs/hdfs.headless.keytab
2018-08-14 22:26:49,052 WARN  org.apache.hadoop.conf.Configuration                          - hdfs-site.xml:an attempt to override final parameter: dfs.datanode.data.dir;  Ignoring.
2018-08-14 22:26:49,052 WARN  org.apache.hadoop.conf.Configuration                          - hdfs-site.xml:an attempt to override final parameter: dfs.datanode.failed.volumes.tolerated;  Ignoring.
2018-08-14 22:26:49,052 WARN  org.apache.hadoop.conf.Configuration                          - hdfs-site.xml:an attempt to override final parameter: dfs.namenode.http-address;  Ignoring.
2018-08-14 22:26:49,052 WARN  org.apache.hadoop.conf.Configuration                          - hdfs-site.xml:an attempt to override final parameter: dfs.namenode.name.dir;  Ignoring.
2018-08-14 22:26:49,052 WARN  org.apache.hadoop.conf.Configuration                          - hdfs-site.xml:an attempt to override final parameter: dfs.webhdfs.enabled;  Ignoring.
2018-08-14 22:26:49,054 WARN  org.apache.hadoop.conf.Configuration                          - core-site.xml:an attempt to override final parameter: fs.defaultFS;  Ignoring.
2018-08-14 22:26:49,346 WARN  org.apache.hadoop.hdfs.BlockReaderLocal                       - The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.

------------------------------------------------------------
 The program finished with the following exception:

org.apache.flink.client.program.ProgramInvocationException: Job failed. (JobID: d9f605aebc798ab5f23145ff7de39b1f)
    at org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:267)
    at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:486)
    at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:474)
    at org.apache.flink.client.program.ContextEnvironment.execute(ContextEnvironment.java:62)
    at org.apache.flink.api.java.ExecutionEnvironment.execute(ExecutionEnvironment.java:816)
    at org.apache.flink.api.java.DataSet.count(DataSet.java:398)
    at com.dounine.flink.Hdfs.main(Hdfs.java:144)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:529)
    at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:421)
    at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:426)
    at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:804)
    at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:280)
    at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:215)
    at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1044)
    at org.apache.flink.client.cli.CliFrontend.lambda$main$11(CliFrontend.java:1120)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
    at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1120)
Caused by: java.io.IOException: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]; Host Details : local host is: "storm3.demo.com/10.3.111.4"; destination host is: "storm5.demo.com":8020; 
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
    at org.apache.hadoop.ipc.Client.call(Client.java:1414)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at com.sun.proxy.$Proxy16.getBlockLocations(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at com.sun.proxy.$Proxy16.getBlockLocations(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:219)
    at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1142)
    at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1132)
    at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1122)
    at org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:264)
    at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:231)
    at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:224)
    at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1295)
    at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:300)
    at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:296)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:296)
    at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:764)
    at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.initialize(LineRecordReader.java:85)
    at org.apache.flink.api.java.hadoop.mapreduce.HadoopInputFormatBase.open(HadoopInputFormatBase.java:187)
    at org.apache.flink.api.java.hadoop.mapreduce.HadoopInputFormatBase.open(HadoopInputFormatBase.java:59)
    at org.apache.flink.runtime.operators.DataSourceTask.invoke(DataSourceTask.java:170)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
    at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:677)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:640)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:724)
    at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1462)
    at org.apache.hadoop.ipc.Client.call(Client.java:1381)
    ... 29 more
Caused by: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
    at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
    at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:411)
    at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:550)
    at org.apache.hadoop.ipc.Client$Connection.access$1800(Client.java:367)
    at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:716)
    at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:712)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:711)
    ... 32 more
Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
    at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
    at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122)
    at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
    at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:224)
    at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
    at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
    at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
    ... 41 more
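
The root cause is the innermost "Caused by": the node running the data-source task (storm3.demo.com in the Host Details above) had no valid Kerberos TGT when the HDFS client attempted the SASL/GSSAPI handshake with the NameNode. Since the failure depends on which TaskManager the task lands on, the job may succeed on some runs and fail on others, which makes this easy to misdiagnose. A quick check is to run klist on every node; a minimal sketch, assuming passwordless SSH (hostnames are placeholders):

# klist exits non-zero when there is no usable credentials cache
for host in storm3.demo.com storm4.demo.com storm5.demo.com; do
    echo "== $host =="
    ssh "$host" klist || echo "no valid Kerberos ticket on $host"
done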

Solution

Add a cron job that renews the Kerberos ticket periodically. Run
crontab -e
then add the following entry and save:

0 */1 * * * kinit -kt /etc/security/keytabs/admin.keytab admin/admin
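
This entry runs kinit at the top of every hour, so each node's ticket cache is refreshed well before the ticket lifetime expires. Make sure the keytab path and principal match your own cluster; admin/admin above is just what this setup used.

As an alternative to the cron job, Flink can also log in from a keytab itself, which covers the JobManager and TaskManager processes without any external renewal. A minimal sketch of the relevant flink-conf.yaml entries, reusing the keytab and principal that appear in the log above:

# Log in from a keytab instead of relying on an external ticket cache
security.kerberos.login.use-ticket-cache: false
security.kerberos.login.keytab: /etc/security/keytabs/hdfs.headless.keytab
security.kerberos.login.principal: hdfs-flink@dounine.com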



Author: dounine
Original post: https://www.jianshu.com/p/a5a756e69867

