Coding

The Coding plugins and agents are designed to help developers write, evaluate, and analyze code more efficiently. These plugins provide a range of functionalities, from commenting and evaluating code snippets to extracting code from GitHub pages and analyzing Git repositories. By leveraging these plugins, developers can enhance their coding practices, improve code quality, and ensure the security of their codebase.


Table of contents
  1. List of Coding Plugins & Agents
  2. Comment
  3. Evaluate
  4. Get Code from GitHub Page
  5. Git Repository Analysis
  6. Search GitHub Repository

List of Coding Plugins & Agents

Index | Title | Access | Description of the Plugin/Agent | Category
1 | Comment | Paid users only | This plugin lets you comment your source code | Coding
2 | Evaluate | Paid users only | This plugin lets you evaluate source code | Coding
3 | Get Code from GitHub Page | Paid users only | Get the source code from a GitHub page | Coding
4 | Git Repository Analysis | Paid users only | Scan a Git repository for security vulnerabilities, performance issues, and other problems | Coding
5 | Search GitHub Repository | Paid users only | Search a GitHub repo for specific text | Coding

Comment

The Comment plugin is designed to enhance code readability and maintainability. It allows developers to easily add comments to their codebase, making it more understandable. It is particularly useful for those who have inherited a codebase and want to gain a better understanding of the code or for those who want to improve the documentation of their code.

All you need to do is copy and paste your code snippet into the plugin and select Submit to generate the prompt.

Then submit the generated prompt to comment the code.

Example: Below is a code snippet with no comments or documentation. Give it a try!

import pandas as pd

def filter_and_sum_function(file_path, column_filters, delimiter=None, sum_columns=None):
    
    if file_path.endswith('.xlsx'):
        user_file = pd.read_excel(file_path)
    elif file_path.endswith('.txt') and delimiter:
        user_file = pd.read_csv(file_path, delimiter=delimiter)
    else:
        raise ValueError("Unsupported file type or missing delimiter for text files.")

    print("DataFrame before filtering:")
    display(user_file.head())

    start_count = len(user_file)

    for column, filters in column_filters.items():
        if column not in user_file.columns:
            print(f"Warning: Column '{column}' does not exist in the DataFrame.")
            continue
        
        filter_condition = pd.Series([False] * len(user_file))

        for filter_value in filters:
            if pd.api.types.is_string_dtype(user_file[column]):
                filter_condition |= user_file[column].str.contains(filter_value, case=False, na=False)
            elif pd.api.types.is_numeric_dtype(user_file[column]):
                try:
                    filter_value_numeric = float(filter_value)
                    filter_condition |= (user_file[column] == filter_value_numeric)
                except ValueError:
                    print(f"Warning: '{filter_value}' is not a valid number for column '{column}'.")
            else:
                print(f"Warning: Column '{column}' has unsupported data type for filtering.")

        user_file = user_file[filter_condition]

    print("Filtered DataFrame:")
    display(user_file)

    count = len(user_file)

    total_sums = {col: user_file[col].sum() for col in sum_columns} if sum_columns else None

    # Output results
    print(f"Total number of rows in the file before filtering: {start_count}")
    print(f'Count of filtered entries: {count}')
    if sum_columns:
        for col, total in total_sums.items():
            print(f'Total sum of {col} in filtered entries: {total}')

file = 'your_file.xlsx'

column_filters = {}
while True:
    column_input = input('Enter a column to filter (or type "done" to finish): ')
    if column_input.lower() == 'done':
        break
    filters_input = input(f'Enter filters for {column_input} (comma-separated): ')
    filters = [filter.strip() for filter in filters_input.split(',')]
    column_filters[column_input.strip()] = filters

sum_columns_input = input('What column(s) would you like to sum? (comma-separated) ')
sum_columns = [col.strip() for col in sum_columns_input.split(',')]

filter_and_sum_function(file, column_filters, sum_columns=sum_columns)

After running the Comment plugin, the code snippet was commented as follows:

import pandas as pd

def filter_and_sum_function(file_path, column_filters, delimiter=None, sum_columns=None):
    """
    This function reads a file, filters the data based on user-defined column filters,
    and calculates the sum of specified columns in the filtered data.

    Parameters:
    - file_path (str): Path to the file to be processed.
    - column_filters (dict): Dictionary containing column names as keys and a list of filter values as values.
    - delimiter (str, optional): Delimiter used in case of a text file. Default is None.
    - sum_columns (list, optional): List of column names to calculate the sum. Default is None.

    Raises:
    - ValueError: If the file type is not supported or if a delimiter is missing for text files.

    Returns:
    - None
    """

    # Read the file based on the file type
    if file_path.endswith('.xlsx'):
        user_file = pd.read_excel(file_path)
    elif file_path.endswith('.txt') and delimiter:
        user_file = pd.read_csv(file_path, delimiter=delimiter)
    else:
        raise ValueError("Unsupported file type or missing delimiter for text files.")

    # Display the DataFrame before filtering
    print("DataFrame before filtering:")
    display(user_file.head())

    # Get the initial count of rows in the DataFrame
    start_count = len(user_file)

    # Apply column filters
    for column, filters in column_filters.items():
        if column not in user_file.columns:
            print(f"Warning: Column '{column}' does not exist in the DataFrame.")
            continue
        
        filter_condition = pd.Series([False] * len(user_file))

        for filter_value in filters:
            if pd.api.types.is_string_dtype(user_file[column]):
                filter_condition |= user_file[column].str.contains(filter_value, case=False, na=False)
            elif pd.api.types.is_numeric_dtype(user_file[column]):
                try:
                    filter_value_numeric = float(filter_value)
                    filter_condition |= (user_file[column] == filter_value_numeric)
                except ValueError:
                    print(f"Warning: '{filter_value}' is not a valid number for column '{column}'.")
            else:
                print(f"Warning: Column '{column}' has unsupported data type for filtering.")

        user_file = user_file[filter_condition]

    # Display the filtered DataFrame
    print("Filtered DataFrame:")
    display(user_file)

    # Get the count of rows in the filtered DataFrame
    count = len(user_file)

    # Calculate the sum of specified columns in the filtered DataFrame
    total_sums = {col: user_file[col].sum() for col in sum_columns} if sum_columns else None

    # Output results
    print(f"Total number of rows in the file before filtering: {start_count}")
    print(f'Count of filtered entries: {count}')
    if sum_columns:
        for col, total in total_sums.items():
            print(f'Total sum of {col} in filtered entries: {total}')

# Example usage
file = 'your_file.xlsx'

column_filters = {}
while True:
    column_input = input('Enter a column to filter (or type "done" to finish): ')
    if column_input.lower() == 'done':
        break
    filters_input = input(f'Enter filters for {column_input} (comma-separated): ')
    filters = [filter.strip() for filter in filters_input.split(',')]
    column_filters[column_input.strip()] = filters

sum_columns_input = input('What column(s) would you like to sum? (comma-separated) ')
sum_columns = [col.strip() for col in sum_columns_input.split(',')]

filter_and_sum_function(file, column_filters, sum_columns=sum_columns)

Users don’t need to worry about comment formatting, as the plugin formats the comments automatically. Additionally, the plugin identifies the programming language and generates comments in the appropriate style.


Evaluate

The Evaluate plugin is designed to help developers evaluate code snippets. It is particularly useful for those who want to quickly see what improvements can be made to the code snippet.

All you need to do is copy and paste your code snippet into the plugin and select Submit to generate the prompt.

Then submit the generated prompt to evaluate the code.

Example: We used the same code snippet as in the Comment plugin example above.

After running the Evaluate plugin, the code snippet was evaluated and improved as follows:

Ask Sage provides a detailed explanation of the improvements that can be made to the code snippet. This can help developers understand the best practices and coding standards that should be followed.

import pandas as pd
import argparse
import logging

logging.basicConfig(level=logging.INFO)

def filter_and_sum_function(file_path: str, column_filters: dict, delimiter: str = None, sum_columns: list = None):
    """
    Filters a DataFrame based on given column filters and calculates the sum of specified columns.

    Parameters:
    file_path (str): Path to the input file.
    column_filters (dict): Dictionary where keys are column names and values are lists of filter values.
    delimiter (str, optional): Delimiter for reading text files. Defaults to None.
    sum_columns (list, optional): List of columns to sum. Defaults to None.
    """
    try:
        if file_path.endswith('.xlsx'):
            user_file = pd.read_excel(file_path)
        elif file_path.endswith('.txt') and delimiter:
            user_file = pd.read_csv(file_path, delimiter=delimiter)
        else:
            raise ValueError("Unsupported file type or missing delimiter for text files.")
    except Exception as e:
        logging.error(f"Error reading file: {e}")
        return

    logging.info("DataFrame before filtering:")
    logging.info(user_file.head())

    start_count = len(user_file)

    for column, filters in column_filters.items():
        if column not in user_file.columns:
            logging.warning(f"Column '{column}' does not exist in the DataFrame.")
            continue
        
        filter_condition = pd.Series([False] * len(user_file))

        for filter_value in filters:
            if pd.api.types.is_string_dtype(user_file[column]):
                filter_condition |= user_file[column].str.contains(filter_value, case=False, na=False)
            elif pd.api.types.is_numeric_dtype(user_file[column]):
                try:
                    filter_value_numeric = float(filter_value)
                    filter_condition |= (user_file[column] == filter_value_numeric)
                except ValueError:
                    logging.warning(f"'{filter_value}' is not a valid number for column '{column}'.")
            else:
                logging.warning(f"Column '{column}' has unsupported data type for filtering.")

        user_file = user_file[filter_condition]

    logging.info("Filtered DataFrame:")
    logging.info(user_file)

    count = len(user_file)

    total_sums = {col: user_file[col].sum() for col in sum_columns} if sum_columns else None

    # Output results
    logging.info(f"Total number of rows in the file before filtering: {start_count}")
    logging.info(f'Count of filtered entries: {count}')
    if sum_columns:
        for col, total in total_sums.items():
            logging.info(f'Total sum of {col} in filtered entries: {total}')

def main():
    parser = argparse.ArgumentParser(description="Filter and sum columns in a file.")
    parser.add_argument('file_path', type=str, help='Path to the input file.')
    parser.add_argument('--delimiter', type=str, help='Delimiter for reading text files.', default=None)
    parser.add_argument('--sum_columns', type=str, help='Comma-separated list of columns to sum.', default='')
    args = parser.parse_args()

    column_filters = {}
    while True:
        column_input = input('Enter a column to filter (or type "done" to finish): ')
        if column_input.lower() == 'done':
            break
        filters_input = input(f'Enter filters for {column_input} (comma-separated): ')
        filters = [filter.strip() for filter in filters_input.split(',')]
        column_filters[column_input.strip()] = filters

    sum_columns = [col.strip() for col in args.sum_columns.split(',')] if args.sum_columns else None

    filter_and_sum_function(args.file_path, column_filters, delimiter=args.delimiter, sum_columns=sum_columns)

if __name__ == "__main__":
    main()

You can re-run the Evaluate plugin after each iteration to see further improvements that can be made to the code snippet. This not only improves the code but also increases developer productivity.


Get Code from GitHub Page

The Get Code from GitHub Page plugin is designed to help developers extract code snippets from GitHub pages quickly. There are many instances where developers need to refer to code snippets from GitHub repositories, and this plugin makes it easy to do so.

The steps to use the Get Code from GitHub Page plugin are as follows:

Step 1 - Create an Ask Sage secret to store your Git repository credentials.

The secret should contain your full name, email, and GitHub access token, entered into the Ask Sage prompt window in the following format:

/add-secret John Doe|||john-doe@example.com|||ghp_1234567890

Ask Sage secrets are encrypted and securely stored. You can use secrets to store sensitive information such as API keys, passwords, and other credentials.

Step 2 - Navigate to the Ask Sage Prompt Settings section, select Prompt Templates, and then select Get Code from GitHub Page.

Step 3 - Select the secret you created and enter the URL of the GitHub file you want to extract code from.

Step 4 - Click Submit to run the plugin and generate the prompt, then submit the prompt to extract the code from the GitHub page.

The result contains the code extracted from the GitHub page, rendered in a code sandbox so you can view it in a more readable format.
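
For readers curious about what retrieving a single file from GitHub involves, below is a minimal, illustrative Python sketch that fetches one file's raw contents through the GitHub REST API. This is not how the plugin is implemented; the owner, repository, file path, and token values are placeholders.

# Illustrative sketch only: fetch one file's raw contents via the GitHub REST API.
# The owner, repository, path, and token below are placeholders, not real values.
import requests

OWNER = "example-org"        # hypothetical owner
REPO = "example-repo"        # hypothetical repository
FILE_PATH = "src/app.py"     # hypothetical file path
TOKEN = "ghp_1234567890"     # placeholder personal access token

url = f"https://api.github.com/repos/{OWNER}/{REPO}/contents/{FILE_PATH}"
headers = {
    "Authorization": f"Bearer {TOKEN}",
    # Request the raw file body instead of the default base64-encoded JSON payload.
    "Accept": "application/vnd.github.raw+json",
}

response = requests.get(url, headers=headers, timeout=30)
response.raise_for_status()
print(response.text)  # the file's source code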


Git Repository Analysis

The Git Repository Analysis plugin scans a Git repository for security vulnerabilities, performance issues, and other problems. This plugin is useful for developers who want to ensure the quality and security of their codebase.

Step 1 - Navigate to the Ask Sage Prompt Settings section, select Prompt Templates, and then select Git Repository Analysis.

Step 2 - Select the secret you created and enter the URL of the Git repository you want to analyze. Check the ‘token consumption’ box to check the token consumption only, without running the full analysis.

If you need to create a secret, you can follow the steps outlined in the Get Code from GitHub Page plugin section above.

Step 3 - Click submit to run the plugin.

Step 4 - Review the results from the plugin.

Step 5 - Re-run the plugin without checking the ‘token consumption’ box to get a full analysis of the repository, and this time check the ‘Commit (separate branch) and PR (GitHub only) code changes (requires Secret)’ option.

Step 6 - Upon completion, a separate branch containing the analysis results is created in the repository and opened as a pull request into the main branch. The repository owner must approve the PR.
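
To give a flavor of the kind of check a repository scan can include, below is a toy Python sketch that clones a repository and flags lines that look like hardcoded credentials. It is only an illustration of one simple check, not the plugin's actual analysis; the repository URL is a placeholder and the regular expression is deliberately naive.

# Toy illustration only: flag lines that look like hardcoded credentials.
# This is not the Git Repository Analysis plugin's implementation.
import re
import subprocess
import tempfile
from pathlib import Path

REPO_URL = "https://github.com/example-org/example-repo.git"  # placeholder URL
SECRET_PATTERN = re.compile(
    r"(api[_-]?key|password|token)\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE
)

with tempfile.TemporaryDirectory() as workdir:
    # Shallow-clone the repository into a temporary directory.
    subprocess.run(["git", "clone", "--depth", "1", REPO_URL, workdir], check=True)
    for path in Path(workdir).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
            if SECRET_PATTERN.search(line):
                print(f"{path.relative_to(workdir)}:{lineno}: possible hardcoded credential")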


Search GitHub Repository

The Search GitHub Repository plugin is designed to help developers search a GitHub repository for specific code snippets. This plugin is useful for developers who want to quickly find code snippets in a large codebase or repository.

This does not go through documentation or README files, only the codebase.

Step 1 - Navigate to the Ask Sage Prompt Settings section, select Prompt Templates, and then select Search GitHub Repository.

Step 2 - Select the secret you created, enter the URL of the GitHub repository you want to search, and enter the text you want to search for.

Step 3 - Click Submit to run the plugin and generate the prompt, then submit the prompt to search the GitHub repository.

Step 4 - Review the results from the plugin.

Example: Searching for the field bank_total_accounts in a repository for a fictitious bank.

The plugin identifies and returns the number of files containing the specified search context, along with the exact lines of code where the term appears. This functionality enables developers to swiftly locate relevant code snippets within a large codebase, providing insights into the frequency and context of the term’s usage. This helps in understanding not only the location of the code but also its purpose and significance within the project.
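
As an illustration of what searching a repository's code for a term involves, below is a minimal Python sketch that queries the GitHub code search API directly. The plugin handles this for you from the prompt template; the repository, search term, and token shown are placeholders.

# Illustrative sketch only: search a repository's code via the GitHub code search API.
# The token and repository are placeholders; the search term mirrors the example above.
import requests

TOKEN = "ghp_1234567890"  # placeholder personal access token
QUERY = "bank_total_accounts repo:example-org/example-repo"  # hypothetical repository

response = requests.get(
    "https://api.github.com/search/code",
    params={"q": QUERY},
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/vnd.github+json",
    },
    timeout=30,
)
response.raise_for_status()

results = response.json()
print(f"Files containing the term: {results['total_count']}")
for item in results["items"]:
    print(f"- {item['path']}")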

