Implementing a Custom Partitioner in MapReduce

Overview

Requirement:

Route the statistics to different output files according to different conditions.
MR uses HashPartitioner by default. When keys are unevenly distributed, hash partitioning can pile hot keys onto a single reduce task and cause data skew; a custom partitioner gives explicit control over which reduce task (and therefore which output file) each key goes to.
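For reference, Hadoop's built-in HashPartitioner essentially boils down to the sketch below (a paraphrase of the stock implementation, not code from this post): every key in the same hash bucket lands on the same reduce task.

import org.apache.hadoop.mapreduce.Partitioner;

// Roughly what the default HashPartitioner does: bucket keys by hash code.
public class HashLikePartitioner<K, V> extends Partitioner<K, V> {
    @Override
    public int getPartition(K key, V value, int numReduceTasks) {
        // Mask off the sign bit so the result is non-negative, then take the modulus
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }
}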

Code implementation:

1. Define a custom partitioner class by extending Partitioner.

package com.aura.hadoop.partitioner;

import com.aura.hadoop.flow.bean.FlowBean;
import org.apache.hadoop.io.Text;

import org.apache.hadoop.mapreduce.Partitioner;

/**
 * @author panghu
 * @description Custom partitioner.
 * Phone numbers starting with 136, 137, 138 and 139 each go to their own file
 * (four files in total); numbers with any other prefix go to a fifth file.
 * <p>
 * The data being partitioned is what the map tasks emit, so the key/value
 * types here are the Mapper's output types.
 * @create 2021-02-15-11:22
 */
public class MyPartitioner extends Partitioner<Text, FlowBean> {
    @Override
    public int getPartition(Text text, FlowBean flowBean, int numPartitions) {
        String phoneHead = text.toString().substring(0, 3);
        switch (phoneHead) {
            case "136":
                return 0;
            case "137":
                return 1;
            case "138":
                return 2;
            case "139":
                return 3;
            default:
                return 4;
        }
    }
}
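As a quick sanity check, the partitioner can be exercised directly, outside of any job. The harness below is hypothetical (not part of the original post):

import com.aura.hadoop.flow.bean.FlowBean;
import org.apache.hadoop.io.Text;

public class PartitionerCheck {
    public static void main(String[] args) {
        MyPartitioner p = new MyPartitioner();
        FlowBean dummy = new FlowBean();
        System.out.println(p.getPartition(new Text("13612345678"), dummy, 5)); // 0
        System.out.println(p.getPartition(new Text("13998765432"), dummy, 5)); // 3
        System.out.println(p.getPartition(new Text("15000000000"), dummy, 5)); // 4
    }
}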

2. In the Driver class, set the number of reduce tasks and register the custom partitioner.

package com.aura.hadoop.partitioner;

import com.aura.hadoop.flow.FlowMapper;
import com.aura.hadoop.flow.FlowReducer;
import com.aura.hadoop.flow.bean.FlowBean;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;

/**
 * @author panghu
 * @description Driver that wires in the custom partitioner.
 * @create 2021-02-15-11:29
 */
public class FlowDriver {
    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        Job job = Job.getInstance(new Configuration());

        job.setJarByClass(FlowDriver.class);

        job.setMapperClass(FlowMapper.class);
        job.setReducerClass(FlowReducer.class);

        // Set the number of reduce tasks and register the custom partitioner.
        // The reduce count must cover every value getPartition() can return (0-4 here).
        job.setNumReduceTasks(5);
        job.setPartitionerClass(MyPartitioner.class);

        // When map and reduce output types match, setting only the reduce-side
        // (job) output types is sufficient
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(FlowBean.class);

        FileInputFormat.setInputPaths(job, new Path("D:\\data\\hadoopdata\\flow.txt"));
        FileOutputFormat.setOutputPath(job, new Path("D:\\data\\out\\myPartitioner"));

        boolean b = job.waitForCompletion(true);
        System.exit(b ? 0 : 1);

    }
}
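Note how setNumReduceTasks interacts with the partitioner: if getPartition() ever returns a value outside [0, numReduceTasks), the job fails on the map side with an "Illegal partition" error; if the reduce count exceeds the partitions actually used, the surplus part-r-* files come out empty; and with a reduce count of 1, Hadoop bypasses the custom partitioner entirely and writes everything to one file.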

3. The remaining classes used by this example are listed below.

package com.aura.hadoop.flow.bean;


import org.apache.hadoop.io.Writable;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

/**
 * @author panghu
 * @description Custom serializable (Writable) bean holding up/down/total flow.
 * @create 2021-02-14-16:49
 */
public class FlowBean implements Writable {
    private Long upFlow;
    private Long downFlow;
    private Long sumFlow;

    public void set(Long upFlow, Long downFlow) {
        this.upFlow = upFlow;
        this.downFlow = downFlow;
        this.sumFlow = upFlow + downFlow;
    }

    public Long getUpFlow() {
        return upFlow;
    }

    public void setUpFlow(Long upFlow) {
        this.upFlow = upFlow;
    }

    public Long getDownFlow() {
        return downFlow;
    }

    public void setDownFlow(Long downFlow) {
        this.downFlow = downFlow;
    }

    public Long getSumFlow() {
        return sumFlow;
    }

    public void setSumFlow(Long sumFlow) {
        this.sumFlow = sumFlow;
    }

    @Override
    public String toString() {
        return upFlow + "\t" + downFlow + "\t" + sumFlow;
    }

    /**
     * Hand the object's fields to the framework for serialization.
     *
     * @throws IOException
     */
    @Override
    public void write(DataOutput dataOutput) throws IOException {
        dataOutput.writeLong(upFlow);
        dataOutput.writeLong(downFlow);
        dataOutput.writeLong(sumFlow);
    }

    /**
     * Repopulate the object's fields; the read order must match the write order.
     *
     * @param dataInput
     * @throws IOException
     */
    @Override
    public void readFields(DataInput dataInput) throws IOException {
        this.upFlow = dataInput.readLong();
        this.downFlow = dataInput.readLong();
        this.sumFlow = dataInput.readLong();
    }
}
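A minimal round-trip sketch (a hypothetical test harness, not from the original post) to confirm that write() and readFields() stay in sync:

import com.aura.hadoop.flow.bean.FlowBean;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class FlowBeanRoundTrip {
    public static void main(String[] args) throws IOException {
        FlowBean before = new FlowBean();
        before.set(100L, 200L);

        // Serialize to bytes the way the framework would
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        before.write(new DataOutputStream(bos));

        // Deserialize into a fresh bean and print it
        FlowBean after = new FlowBean();
        after.readFields(new DataInputStream(new ByteArrayInputStream(bos.toByteArray())));
        System.out.println(after); // 100, 200 and 300, tab-separated
    }
}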

package com.aura.hadoop.flow;

import com.aura.hadoop.flow.bean.FlowBean;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.IOException;

/**
 * @author panghu
 * @description Mapper that extracts the phone number and flow fields from each line.
 * @create 2021-02-14-16:48
 */
public class FlowMapper extends Mapper<LongWritable, Text, Text, FlowBean> {
    private Text k = new Text();
    private FlowBean flow = new FlowBean();

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        // Take one line of input and split it on tabs
        String line = value.toString();
        String[] split = line.split("\t");
        // Phone number is the second field; upFlow/downFlow are the third- and
        // second-to-last fields
        String phone = split[1];
        k.set(phone);
        flow.set(Long.parseLong(split[split.length - 3]), Long.parseLong(split[split.length - 2]));
        context.write(k, flow);
    }
}
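The mapper assumes tab-separated input with the phone number in the second column and the up/down flow in the third- and second-to-last columns. The standalone check below uses a hypothetical sample line (the field layout is assumed; the original post does not show the data file):

public class ParseCheck {
    public static void main(String[] args) {
        // Hypothetical line: id, phone, ip, site, upFlow, downFlow, status
        String line = "1\t13736230513\t192.196.100.1\twww.example.com\t2481\t24681\t200";
        String[] split = line.split("\t");
        String phone = split[1];                              // 13736230513
        long up = Long.parseLong(split[split.length - 3]);    // 2481
        long down = Long.parseLong(split[split.length - 2]);  // 24681
        System.out.println(phone + " up=" + up + " down=" + down);
    }
}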

package com.aura.hadoop.flow;

import com.aura.hadoop.flow.bean.FlowBean;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;

/**
 * @author panghu
 * @description Reducer that sums the up/down flow per phone number.
 * @create 2021-02-14-16:49
 */
public class FlowReducer extends Reducer<Text, FlowBean, Text, FlowBean> {
    private FlowBean flow = new FlowBean();

    @Override
    protected void reduce(Text key, Iterable<FlowBean> values, Context context) throws IOException, InterruptedException {
        long upFlow = 0L;
        long downFlow = 0L;
        // Accumulate the up/down flow for this phone number; set() recomputes the total
        for (FlowBean value : values) {
            upFlow += value.getUpFlow();
            downFlow += value.getDownFlow();
        }
        flow.set(upFlow, downFlow);
        context.write(key, flow);
    }
}
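With this driver configuration, the output directory ends up with five files, part-r-00000 through part-r-00004: numbers starting with 136, 137, 138 and 139 land in the first four respectively, and every other prefix goes to part-r-00004.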
