Pig Getting Started: Environment Setup

This article describes installing pig-0.14.0 on Red Hat Linux with Hadoop 2.2.0 and JDK 1.7.

 

1. Downloading the Pig installation package

Download URL: http://mirrors.hust.edu.cn/apache/pig/pig-0.14.0/
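The tarball can be fetched from that mirror directly, e.g. (the filename pig-0.14.0.tar.gz matches the archive extracted in the next section):

wget http://mirrors.hust.edu.cn/apache/pig/pig-0.14.0/pig-0.14.0.tar.gz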

 

2. Installation and configuration

1) Extract to the installation directory

For example:  tar -zxvf pig-0.14.0.tar.gz -C /itcast

2) Configuration

Edit the .bash_profile file and add:

export PIG_INSTALL=/itcast/pig-0.14.0
# Hadoop 2.x keeps its configuration under etc/hadoop (not conf/ as in Hadoop 1.x)
export PIG_CLASSPATH=$HADOOP_HOME/etc/hadoop/
export PATH=$PATH:$PIG_INSTALL/bin
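
After saving the file, reload it and confirm that Pig resolves on the PATH; a quick sanity check (assuming the paths above) is:

source ~/.bash_profile
pig -version          # should report Apache Pig version 0.14.0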

 

3. Testing

Example: list all the users on the current operating system.

1) Copy /etc/passwd to the /root directory;

2) Run pig -x local;

3) Load the contents of the passwd file into Pig;

grunt> A = load 'passwd' using PigStorage(':');

4) Extract the username field;
grunt> B = foreach A generate $0 as id;

5) Display the result.
grunt> dump B; 
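
The same steps can also be run non-interactively as a Pig Latin script; a minimal sketch (the file name users.pig is illustrative) is:

-- users.pig: list the user-name field of /etc/passwd
-- run from the directory that contains the copied passwd file
A = LOAD 'passwd' USING PigStorage(':');
B = FOREACH A GENERATE $0 AS id;
DUMP B;

Run it in local mode with:

pig -x local users.pig

Each row of the output is a single-field tuple such as (root).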

 

 

 
