public class MapRedUtil extends Object
| Modifier and Type | Field and Description |
|---|---|
| static String | FILE_SYSTEM_NAME |
| Constructor and Description |
|---|
| MapRedUtil() |
| Modifier and Type | Method and Description |
|---|---|
| static FileSpec | checkLeafIsStore(PhysicalPlan plan, PigContext pigContext) |
| static void | copyTmpFileConfigurationValues(org.apache.hadoop.conf.Configuration fromConf, org.apache.hadoop.conf.Configuration toConf) |
| static List<org.apache.hadoop.fs.FileStatus> | getAllFileRecursively(List<org.apache.hadoop.fs.FileStatus> files, org.apache.hadoop.conf.Configuration conf) Get all files recursively from the given list of files. |
| static List<List<org.apache.hadoop.mapreduce.InputSplit>> | getCombinePigSplits(List<org.apache.hadoop.mapreduce.InputSplit> oneInputSplits, long maxCombinedSplitSize, org.apache.hadoop.conf.Configuration conf) |
| static long | getPathLength(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.FileStatus status) |
| static long | getPathLength(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.FileStatus status, long max) Returns the total number of bytes for this file or, if a directory, for all files in the directory. |
| String | inputSplitToString(org.apache.hadoop.mapreduce.InputSplit[] splits) |
| static <E> Map<E,Pair<Integer,Integer>> | loadPartitionFileFromLocalCache(String keyDistFile, Integer[] totalReducers, byte keyType, org.apache.hadoop.conf.Configuration mapConf) Loads the key distribution sampler file. |
| static void | setupStreamingDirsConfMulti(PigContext pigContext, org.apache.hadoop.conf.Configuration conf) Sets up output and log dir paths for a multi-store streaming job. |
| static void | setupStreamingDirsConfSingle(POStore st, PigContext pigContext, org.apache.hadoop.conf.Configuration conf) Sets up output and log dir paths for a single-store streaming job. |
| static void | setupUDFContext(org.apache.hadoop.conf.Configuration job) |
public static final String FILE_SYSTEM_NAME
public static <E> Map<E,Pair<Integer,Integer>> loadPartitionFileFromLocalCache(String keyDistFile, Integer[] totalReducers, byte keyType, org.apache.hadoop.conf.Configuration mapConf) throws IOException

Loads the key distribution sampler file.

Parameters:
keyDistFile - the name for the distribution file
totalReducers - gets set to the total number of reducers as found in the dist file
keyType - type of the key to be stored in the return map; Tuple is currently treated as a special case
Throws:
IOException
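A minimal usage sketch, assuming the key-distribution file produced by Pig's sampling job has already been shipped to the task's local cache. The file name "pig.keydist", the surrounding class, and the import locations for Pair and DataType (which follow Pig's usual package layout) are assumptions of this sketch, not part of the API:

```java
import java.io.IOException;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil;
import org.apache.pig.data.DataType;
import org.apache.pig.impl.util.Pair;

public class KeyDistExample {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();

        // Filled in by the call with the reducer count found in the dist file.
        Integer[] totalReducers = new Integer[1];

        // "pig.keydist" is a placeholder for the sampler output file name
        // in the task's local cache (an assumption for this sketch).
        Map<Object, Pair<Integer, Integer>> keyDist =
                MapRedUtil.loadPartitionFileFromLocalCache(
                        "pig.keydist", totalReducers, DataType.TUPLE, conf);

        System.out.println("reducers: " + totalReducers[0]
                + ", sampled keys: " + keyDist.size());
    }
}
```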
public static void copyTmpFileConfigurationValues(org.apache.hadoop.conf.Configuration fromConf, org.apache.hadoop.conf.Configuration toConf)
public static void setupUDFContext(org.apache.hadoop.conf.Configuration job) throws IOException

Throws:
IOException
public static void setupStreamingDirsConfSingle(POStore st, PigContext pigContext, org.apache.hadoop.conf.Configuration conf) throws IOException

Sets up output and log dir paths for a single-store streaming job.

Parameters:
st - POStore of the current job
pigContext -
conf -
Throws:
IOException

public static void setupStreamingDirsConfMulti(PigContext pigContext, org.apache.hadoop.conf.Configuration conf) throws IOException

Sets up output and log dir paths for a multi-store streaming job.

Parameters:
pigContext -
conf -
Throws:
IOException
public static FileSpec checkLeafIsStore(PhysicalPlan plan, PigContext pigContext) throws ExecException

Throws:
ExecException
public static List<org.apache.hadoop.fs.FileStatus> getAllFileRecursively(List<org.apache.hadoop.fs.FileStatus> files, org.apache.hadoop.conf.Configuration conf) throws IOException

Get all files recursively from the given list of files.

Parameters:
files - a list of FileStatus
conf - the configuration object
Throws:
IOException
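The recursive expansion can be pictured with the following self-contained sketch over the Hadoop FileSystem API; it illustrates the behaviour described above and is not the class's own implementation:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;

public class ListFilesSketch {
    /** Expands every directory in the input list into the files it contains. */
    public static List<FileStatus> allFilesRecursively(
            List<FileStatus> files, Configuration conf) throws IOException {
        List<FileStatus> result = new ArrayList<>();
        for (FileStatus status : files) {
            collect(status, conf, result);
        }
        return result;
    }

    private static void collect(FileStatus status, Configuration conf,
                                List<FileStatus> result) throws IOException {
        if (status.isDirectory()) {
            // Descend into the directory and expand its children in turn.
            FileSystem fs = status.getPath().getFileSystem(conf);
            for (FileStatus child : fs.listStatus(status.getPath())) {
                collect(child, conf, result);
            }
        } else {
            result.add(status);
        }
    }
}
```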
public static long getPathLength(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.FileStatus status) throws IOException

Throws:
IOException
public static long getPathLength(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.FileStatus status, long max) throws IOException

Returns the total number of bytes for this file or, if a directory, for all files in the directory.

Parameters:
fs - FileSystem
status - FileStatus
max - maximum total length that triggers an early exit. Often we are only interested in whether the total length of the files is greater than X; in that case the function can return as soon as max is reached.
Throws:
IOException
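The early-exit behaviour of the max parameter can be sketched as follows; this is an illustrative reimplementation over the Hadoop FileSystem API, assuming a depth-first walk, not the method's actual code:

```java
import java.io.IOException;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;

public class PathLengthSketch {
    /**
     * Sums the lengths of all files under status, returning as soon as the
     * running total reaches max (useful when only "bigger than X?" matters).
     */
    public static long pathLength(FileSystem fs, FileStatus status, long max)
            throws IOException {
        if (!status.isDirectory()) {
            return status.getLen();
        }
        long total = 0;
        for (FileStatus child : fs.listStatus(status.getPath())) {
            total += pathLength(fs, child, max - total);
            if (total >= max) {
                return total;   // early exit once the threshold is reached
            }
        }
        return total;
    }
}
```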
public static List<List<org.apache.hadoop.mapreduce.InputSplit>> getCombinePigSplits(List<org.apache.hadoop.mapreduce.InputSplit> oneInputSplits, long maxCombinedSplitSize, org.apache.hadoop.conf.Configuration conf) throws IOException, InterruptedException

Throws:
IOException
InterruptedException
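Conceptually, combining packs small input splits into groups whose total size stays under maxCombinedSplitSize. The sketch below shows one simple greedy packing; the real method may additionally consider split ordering and data locality, so treat this only as an illustration of the grouping idea:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.mapreduce.InputSplit;

public class CombineSplitsSketch {
    /**
     * Greedily packs splits into groups whose total length stays at or below
     * maxCombinedSplitSize; each inner list would back one combined split.
     */
    public static List<List<InputSplit>> combine(List<InputSplit> splits,
            long maxCombinedSplitSize) throws IOException, InterruptedException {
        List<List<InputSplit>> result = new ArrayList<>();
        List<InputSplit> current = new ArrayList<>();
        long currentSize = 0;
        for (InputSplit split : splits) {
            long len = split.getLength();
            if (!current.isEmpty() && currentSize + len > maxCombinedSplitSize) {
                result.add(current);          // close the current group
                current = new ArrayList<>();
                currentSize = 0;
            }
            current.add(split);
            currentSize += len;
        }
        if (!current.isEmpty()) {
            result.add(current);
        }
        return result;
    }
}
```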
public String inputSplitToString(org.apache.hadoop.mapreduce.InputSplit[] splits) throws IOException, InterruptedException

Throws:
IOException
InterruptedException

Copyright © 2007-2017 The Apache Software Foundation