
Map input split Python | Example code

Use map with input().split() to read multiple input values from the user on one line in Python. The code below queries the user for input, splits the line into words, converts those words to integers, and unpacks them into two variables x and y.

x, y = map(int, input().split())

It works as follows:

  1. input() will query the user for input and read one line of it;
  2. .split() will split that line into a list of “words”;
  3. map(int, ...) will call int on each word; it does this lazily (although that is not important here); and
  4. x, y = ... will unpack the result into two elements, assigning the first one to x and the second one to y.
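The steps above can be traced with a hardcoded string standing in for input(), so each intermediate value is visible:

```python
line = "3 7"                  # stands in for input() returning one line
words = line.split()          # ["3", "7"]
numbers = map(int, words)     # a lazy map object; ints are produced on demand
x, y = numbers                # unpacking pulls two values: x = 3, y = 7
print(x, y)  # 3 7
```

Note that a map object can only be consumed once; unpacking it exhausts it.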

Example Map input split in Python

A simple example that reads two numbers from input and typecasts them to int using the map function in Python.

x, y = map(int, input("Enter 2 numbers with space: ").split())

print("First Number: ", x)
print("Second Number: ", y)

Output (for input 10 20):

First Number:  10
Second Number:  20
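If the user types something other than exactly two integers, either the int() conversion or the unpacking raises an error. A small sketch of validating the line before unpacking (the helper name parse_pair is my own, not part of any library):

```python
def parse_pair(line):
    """Parse a line of text into exactly two integers.

    Raises ValueError if the line does not contain exactly
    two whitespace-separated integer words.
    """
    words = line.split()
    if len(words) != 2:
        raise ValueError("expected exactly two values")
    return tuple(map(int, words))  # int() raises ValueError on bad words

x, y = parse_pair("10 20")
print(x, y)  # 10 20
```

In a real program you would call parse_pair(input(...)) inside a try/except ValueError block and re-prompt the user on failure.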

In Python, if you’re working with Hadoop MapReduce jobs, you typically don’t directly manage input splits. The Hadoop framework handles the splitting of the input data for you, and your mapper function will receive these splits as input.

However, if you’re looking for a way to simulate or understand the concept of input splits in a non-Hadoop context, you can create a simple function to divide your data into smaller chunks for processing. Here’s an example of how you can create input splits in Python:

def generate_input_splits(data, split_size):
    """
    Function to generate input splits from a given list 'data' with a specific 'split_size'.
    
    Parameters:
        data (list): The input data to be split.
        split_size (int): The size of each split.
        
    Returns:
        list: A list of input splits, where each split is a sublist of 'data'.
    """
    input_splits = [data[i:i + split_size] for i in range(0, len(data), split_size)]
    return input_splits

You can use this function to create input splits of your data and then process each split independently in a parallel manner, similar to how a mapper in Hadoop would process input splits.

Here’s an example of how you can use the function:

input_data = list(range(1, 101))  # Sample input data from 1 to 100
split_size = 10  # Size of each input split

input_splits = generate_input_splits(input_data, split_size)

for split in input_splits:
    # Process each split using your mapper function or any other processing logic
    print("Processing input split:", split)
    # Your mapper logic here...
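To make the mapper idea concrete, here is a toy mapper that reduces each split to its sum; the per-split results can then be combined, mimicking the map and reduce phases. (The mapper logic here is an illustrative stand-in, not anything Hadoop-specific.)

```python
def mapper(split):
    # Toy mapper: reduce each split to its sum, a stand-in for
    # whatever per-split logic a real job would run.
    return sum(split)

input_data = list(range(1, 101))   # sample data from 1 to 100
split_size = 10
splits = [input_data[i:i + split_size]
          for i in range(0, len(input_data), split_size)]

# Each split is processed independently, so this loop could be
# parallelized (e.g. with concurrent.futures) without changes to mapper.
partial_results = [mapper(s) for s in splits]
print(partial_results)       # ten partial sums, one per split
print(sum(partial_results))  # 5050, the total over all splits
```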

Comment if you have any doubts or suggestions on this Python input code.

Note: IDE: PyCharm 2021.3.3 (Community Edition)

Windows 10

Python 3.10.1

All Python examples are in Python 3, so they may behave differently in Python 2 or in newer versions.
