Re: Fail to deploy Flink on minikube

2020-09-02 Thread superainbower
Hi Till,
This is the TaskManager log.
As you can see, the log first prints (at line 92) 'Could not connect to
flink-jobmanager:6123',
then (at line 128) 'Could not resolve ResourceManager address
akka.tcp://flink@flink-jobmanager:6123/user/rpc/resourcemanager_*, retrying in
1 ms: Could not connect to rpc endpoint under address
akka.tcp://flink@flink-jobmanager:6123/user/rpc/resourcemanager_*.',
and keeps repeating this.


A few minutes later, the TaskManager shuts down and restarts.


These are my YAML files. Could you help me confirm whether I omitted something?
Thanks a lot!
---
flink-configuration-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: flink-config
  labels:
    app: flink
data:
  flink-conf.yaml: |+
    jobmanager.rpc.address: flink-jobmanager
    taskmanager.numberOfTaskSlots: 1
    blob.server.port: 6124
    jobmanager.rpc.port: 6123
    taskmanager.rpc.port: 6122
    queryable-state.proxy.ports: 6125
    jobmanager.memory.process.size: 1024m
    taskmanager.memory.process.size: 1024m
    parallelism.default: 1
  log4j-console.properties: |+
    rootLogger.level = INFO
    rootLogger.appenderRef.console.ref = ConsoleAppender
    rootLogger.appenderRef.rolling.ref = RollingFileAppender
    logger.akka.name = akka
    logger.akka.level = INFO
    logger.kafka.name = org.apache.kafka
    logger.kafka.level = INFO
    logger.hadoop.name = org.apache.hadoop
    logger.hadoop.level = INFO
    logger.zookeeper.name = org.apache.zookeeper
    logger.zookeeper.level = INFO
    appender.console.name = ConsoleAppender
    appender.console.type = CONSOLE
    appender.console.layout.type = PatternLayout
    appender.console.layout.pattern = %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
    appender.rolling.name = RollingFileAppender
    appender.rolling.type = RollingFile
    appender.rolling.append = false
    appender.rolling.fileName = ${sys:log.file}
    appender.rolling.filePattern = ${sys:log.file}.%i
    appender.rolling.layout.type = PatternLayout
    appender.rolling.layout.pattern = %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
    appender.rolling.policies.type = Policies
    appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
    appender.rolling.policies.size.size = 100MB
    appender.rolling.strategy.type = DefaultRolloverStrategy
    appender.rolling.strategy.max = 10
    logger.netty.name = org.apache.flink.shaded.akka.org.jboss.netty.channel.DefaultChannelPipeline
    logger.netty.level = OFF
---
jobmanager-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: flink-jobmanager
spec:
  type: ClusterIP
  ports:
  - name: rpc
    port: 6123
  - name: blob-server
    port: 6124
  - name: webui
    port: 8081
  selector:
    app: flink
    component: jobmanager
---
jobmanager-session-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flink-jobmanager
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flink
      component: jobmanager
  template:
    metadata:
      labels:
        app: flink
        component: jobmanager
    spec:
      containers:
      - name: jobmanager
        image: registry.cn-hangzhou.aliyuncs.com/superainbower/flink:1.11.1
        args: ["jobmanager"]
        ports:
        - containerPort: 6123
          name: rpc
        - containerPort: 6124
          name: blob-server
        - containerPort: 8081
          name: webui
        livenessProbe:
          tcpSocket:
            port: 6123
          initialDelaySeconds: 30
          periodSeconds: 60
        volumeMounts:
        - name: flink-config-volume
          mountPath: /opt/flink/conf
        securityContext:
          runAsUser: 9999  # refers to user _flink_ from official flink image, change if necessary
      volumes:
      - name: flink-config-volume
        configMap:
          name: flink-config
          items:
          - key: flink-conf.yaml
            path: flink-conf.yaml
          - key: log4j-console.properties
            path: log4j-console.properties
      imagePullSecrets:
      - name: regcred
---
taskmanager-session-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flink-taskmanager
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flink
      component: taskmanager
  template:
    metadata:
      labels:
        app: flink
        component: taskmanager
    spec:
      containers:
      - name: taskmanager
        image: registry.cn-hangzhou.aliyuncs.com/superainbower/flink:1.11.1
        args: ["taskmanager"]
        ports:
        - containerPort: 6122
          name: rpc
        - containerPort: 6125
          name: query-state
        livenessProbe:
          tcpSocket:
            port: 6122
          initialDelaySeconds: 30
          periodSeconds: 60
        volumeMounts:

Re: Fail to deploy Flink on minikube

2020-09-02 Thread superainbower
Hi Till,
I found something that may be helpful.
The Kubernetes Dashboard shows the job-manager IP as 172.18.0.5 and the
task-manager IP as 172.18.0.6.
When I run 'kubectl exec -ti flink-taskmanager-74c68c6f48-jqpbn -- /bin/bash'
and then 'ping 172.18.0.5', I get a response.
But when I ping flink-jobmanager, there is no response.
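A quick way to narrow this down (pod and service names below are taken from the manifests in this thread; adjust to your cluster) is to check whether cluster DNS resolves the service name at all, whether the service actually has endpoints behind it, and whether kube-dns itself is healthy:

```shell
# Check that the Service exists and has endpoints behind it
kubectl get svc flink-jobmanager
kubectl get endpoints flink-jobmanager

# Check cluster DNS health (label may differ per cluster; this is a sketch)
kubectl get pods -n kube-system -l k8s-app=kube-dns

# Try resolving the service name from inside the TaskManager pod
kubectl exec -ti flink-taskmanager-74c68c6f48-jqpbn -- nslookup flink-jobmanager
```

Note that a ClusterIP is virtual and typically does not answer ICMP, so a failed `ping flink-jobmanager` by itself does not prove the service is broken; name resolution plus a TCP connection attempt to port 6123 is the more meaningful test.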


superainbower
superainbo...@163.com


On 09/3/2020 09:03,superainbower wrote:

Re: Fail to deploy Flink on minikube

2020-09-02 Thread superainbower
Hi Yang,
I updated taskmanager-session-deployment.yaml like this:


apiVersion: apps/v1
kind: Deployment
metadata:
  name: flink-taskmanager
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flink
      component: taskmanager
  template:
    metadata:
      labels:
        app: flink
        component: taskmanager
    spec:
      containers:
      - name: taskmanager
        image: registry.cn-hangzhou.aliyuncs.com/superainbower/flink:1.11.1
        args: ["taskmanager", "-Djobmanager.rpc.address=172.18.0.5"]
        ports:
        - containerPort: 6122
          name: rpc
        - containerPort: 6125
          name: query-state
        livenessProbe:
          tcpSocket:
            port: 6122
          initialDelaySeconds: 30
          periodSeconds: 60
        volumeMounts:
        - name: flink-config-volume
          mountPath: /opt/flink/conf/
        securityContext:
          runAsUser: 9999  # refers to user _flink_ from official flink image, change if necessary
      volumes:
      - name: flink-config-volume
        configMap:
          name: flink-config
          items:
          - key: flink-conf.yaml
            path: flink-conf.yaml
          - key: log4j-console.properties
            path: log4j-console.properties
      imagePullSecrets:
      - name: regcred


Then I deleted the TaskManager pod and let it restart, but the logs print this:


Could not resolve ResourceManager address
akka.tcp://flink@172.18.0.5:6123/user/rpc/resourcemanager_*, retrying in 1 ms:
Could not connect to rpc endpoint under address
akka.tcp://flink@172.18.0.5:6123/user/rpc/resourcemanager_*


It only changed flink-jobmanager to 172.18.0.5 in the error message.


On 09/3/2020 11:09,Yang Wang wrote:
I guess something is wrong with your kube proxy, which causes TaskManager could 
not connect to JobManager.
You could verify this by directly using JobManager Pod ip instead of service 
name.


Please do as follows.
* Edit the TaskManager deployment(via kubectl edit flink-taskmanager) and 
update the args field to the following.
   args: ["taskmanager", "-Djobmanager.rpc.address=172.18.0.5"]Given that 
"172.18.0.5" is the JobManager pod ip.
* Delete the current TaskManager pod and let restart again
* Now check the TaskManager logs to check whether it could register successfully
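The steps above could look like this on the command line (the labels and names are taken from the manifests earlier in the thread; adjust them to your cluster):

```shell
# Find the JobManager Pod IP
kubectl get pods -l app=flink,component=jobmanager -o wide

# Edit the TaskManager deployment and set:
#   args: ["taskmanager", "-Djobmanager.rpc.address=<jobmanager-pod-ip>"]
kubectl edit deployment flink-taskmanager

# Delete the TaskManager pod; the Deployment recreates it automatically
kubectl delete pod -l app=flink,component=taskmanager

# Watch the new pod's logs for a successful registration
kubectl logs -f -l app=flink,component=taskmanager
```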






Best,
Yang


superainbower wrote on Thu, Sep 3, 2020, 9:35 AM:


Re: Fail to deploy Flink on minikube

2020-09-03 Thread superainbower
Hi Till & Yang,
I can deploy Flink on Kubernetes (not minikube), and it works well.
So there is some problem with my minikube, but I can't find and fix it.
Anyway, I can deploy on K8s now.
Thanks for your help!


On 09/3/2020 15:47,Till Rohrmann wrote:
In order to exclude a Minikube problem, you could also try to run Flink on an 
older Minikube and an older K8s version. Our end-to-end tests use Minikube 
v1.8.2, for example.


Cheers,
Till


On Thu, Sep 3, 2020 at 8:44 AM Yang Wang  wrote:

Sorry, I forgot that the JobManager binds its rpc address to flink-jobmanager,
not the IP address.
So you also need to update jobmanager-session-deployment.yaml with the
following changes.



...
  containers:
  - name: jobmanager
    env:
    - name: JM_IP
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: status.podIP
    image: flink:1.11
    args: ["jobmanager", "$(JM_IP)"]
...


After that, the JobManager binds its rpc address to its Pod IP.
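For completeness, the TaskManager side then has to target that same Pod IP, as in the deployment shown earlier in this thread (172.18.0.5 is the JobManager Pod IP from this thread; Pod IPs change on restart, so this setup is only suitable for debugging, not for regular operation):

```yaml
# TaskManager container args, matching the JobManager's bind address above
args: ["taskmanager", "-Djobmanager.rpc.address=172.18.0.5"]
```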


Best,
Yang





superainbower wrote on Thu, Sep 3, 2020, 11:38 AM:


Flink on k8s

2020-09-29 Thread superainbower
Hi,
How do I configure the state backend when I deploy Flink on K8s? I just added
the following to flink-conf.yaml, but it doesn't work:


state.backend: rocksdb
state.checkpoints.dir: hdfs://slave2:8020/flink/checkpoints
state.savepoints.dir: hdfs://slave2:8020/flink/savepoints
state.backend.incremental: true





Re:Flink on k8s

2020-09-30 Thread superainbower
And I got this error log:


Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException:
Hadoop is not in the classpath/dependencies.
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could
not find a file system implementation for scheme 'hdfs'. The scheme is not
directly supported by Flink and no Hadoop file system to support this scheme
could be loaded. For a full list of supported file systems, please see
https://ci.apache.org/projects/flink/flink-docs-stable/ops/filesystems/.






On 09/30/2020 14:48,superainbower wrote:



Re: Flink Kubernetes Libraries

2020-10-12 Thread superainbower
Hi Till,
Could you tell me how to configure HDFS as the state backend when I deploy
Flink on K8s?
I tried adding the following to flink-conf.yaml:


state.backend: rocksdb
state.checkpoints.dir: hdfs://slave2:8020/flink/checkpoints
state.savepoints.dir: hdfs://slave2:8020/flink/savepoints
state.backend.incremental: true


I also added flink-shaded-hadoop2-2.8.3-1.8.3.jar to /opt/flink/lib.


But it doesn't work, and I got these error logs:


Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could
not find a file system implementation for scheme 'hdfs'. The scheme is not
directly supported by Flink and no Hadoop file system to support this scheme
could be loaded. For a full list of supported file systems, please see
https://ci.apache.org/projects/flink/flink-docs-stable/ops/filesystems/.


Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException:
Cannot support file system for 'hdfs' via Hadoop, because Hadoop is not in the
classpath, or some classes are missing from the classpath


Caused by: java.lang.NoClassDefFoundError: Could not initialize class
org.apache.flink.runtime.util.HadoopUtils
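Flink 1.11 no longer bundles Hadoop, and flink-shaded-hadoop2-2.8.3-1.8.3.jar was built for the Flink 1.8 line, which may explain the NoClassDefFoundError. One sketch of a fix, assuming a custom image (the jar name below is the newer "uber" artifact naming and must match your Hadoop version; download it into the Docker build context first):

```dockerfile
FROM flink:1.11.1-scala_2.11
# Option A: ship a pre-bundled Hadoop uber jar in Flink's lib directory
COPY flink-shaded-hadoop-2-uber-2.8.3-10.0.jar /opt/flink/lib/
# Option B (instead of A): point Flink at an existing Hadoop installation
# by exporting HADOOP_CLASSPATH in the container environment, e.g.
# ENV HADOOP_CLASSPATH=/opt/hadoop/etc/hadoop:/opt/hadoop/share/hadoop/common/*:...
```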
On 10/09/2020 22:13, Till Rohrmann wrote:
Hi Saksham,


if you want to extend the Flink Docker image, you can find more details
here [1].


If you want to include the library in your user jar, then you have to add the 
library as a dependency to your pom.xml file and enable the shade plugin for 
building an uber jar [2].


[1] 
https://ci.apache.org/projects/flink/flink-docs-stable/ops/deployment/docker.html#advanced-customization
[2] 
https://maven.apache.org/plugins/maven-shade-plugin/examples/includes-excludes.html
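A minimal sketch of the shade-plugin setup from [2] (the plugin version is illustrative; pin whatever your build already uses):

```xml
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <version>3.2.4</version>
      <executions>
        <execution>
          <!-- bundle dependencies into the jar during `mvn package` -->
          <phase>package</phase>
          <goals>
            <goal>shade</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```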


Cheers,
Till


On Fri, Oct 9, 2020 at 3:22 PM saksham sapra  wrote:

Thanks Till for helping out.


The way you suggested: is it possible to copy libs that are in the D:
directory to FLINK_HOME/lib? I tried running a copy command (copy D:/data/libs
to FLINK_HOME/lib) and it gets copied, but I don't know how I can check where
it was copied and whether these libs are picked up by Flink.




Thanks,
Saksham Sapra


On Wed, Oct 7, 2020 at 9:40 PM Till Rohrmann  wrote:

Hi Saksham,


the easiest approach would probably be to include the required libraries in
your user code jar which you submit to the cluster. Using Maven's shade plugin
should help with this task. Alternatively, you could also create a custom Flink
Docker image where you add the required libraries to the FLINK_HOME/lib
directory. This would however mean that every job you submit to the Flink
cluster would see these libraries on the system class path.
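The custom-image approach could look like this (tag and jar paths are placeholders; put your jars next to the Dockerfile):

```dockerfile
# Extend the official image so the libraries land on the system class path.
FROM flink:1.11.1-scala_2.11
COPY libs/*.jar /opt/flink/lib/
```

Then build it with something like `docker build -t my-flink:1.11.1 .` and point the `image:` field of the JobManager and TaskManager deployments at the new tag.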


Cheers,
Till


On Wed, Oct 7, 2020 at 2:08 PM saksham sapra  wrote:

Hi,


I have made some configuration using this page:
https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/deployment/kubernetes.html,
and I am able to run Flink in the UI, but I need to submit a job using
http://localhost:8001/api/v1/namespaces/default/services/flink-jobmanager:webui/proxy/#/submit
through Postman. I have some libraries which, locally, I can add to the lib
folder, but in this setup how can I add my libraries so that it works properly?