PL/SQL blocks come in two types: named and unnamed (anonymous). You can use named procedures for standalone, reusable tasks, and anonymous blocks for ad-hoc work and inside shell scripts.

Anonymous Block (Unnamed)

A block that is not given a name is called an anonymous block. Because it has no name, it is not stored in the database; you run it as-is.

Read: How to execute a stored procedure in SQL Developer

[DECLARE
… optional declaration statements …]
BEGIN
… executable statements …
[EXCEPTION
… optional exception handler statements …]
END;

Sample anonymous block

Here's an example of an anonymous PL/SQL block.

SET SERVEROUTPUT ON;
DECLARE
 V_MYNUMBER NUMBER(2) := 1;  -- local variable, visible only inside this block
BEGIN
 DBMS_OUTPUT.PUT_LINE('MY INPUT IS : ' || V_MYNUMBER);
END;
/
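
The skeleton above also allows an optional EXCEPTION section, which the sample doesn't exercise. Here is a minimal sketch of the same kind of block with a handler added; forcing a ZERO_DIVIDE error is purely an illustrative assumption:

SET SERVEROUTPUT ON;
DECLARE
 V_MYNUMBER NUMBER(2) := 1;
BEGIN
 -- divide by zero on purpose so the handler below fires (illustrative assumption)
 V_MYNUMBER := V_MYNUMBER / 0;
EXCEPTION
 WHEN ZERO_DIVIDE THEN
  DBMS_OUTPUT.PUT_LINE('CAUGHT: ' || SQLERRM);
END;
/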

Named PL/SQL block

Here is an example of a named PL/SQL block; the procedure is named pl.

Read: How to write Lookup query in PL/SQL

create or replace PROCEDURE pl(aiv_text in varchar2)
is
begin
 DBMS_OUTPUT.put_line(aiv_text);  -- echo the input parameter
end;
/
execute pl('my input srini');
drop procedure pl;
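
The pl procedure takes a parameter but declares no local variables. As a hedged sketch of where declarations go in a named block (after IS, before BEGIN), here is a variant; the name pl2 and the v_greeting variable are illustrative assumptions, not part of the original example:

create or replace PROCEDURE pl2(aiv_text in varchar2)
is
 v_greeting varchar2(50) := 'Hello, ';  -- local variable declared between IS and BEGIN
begin
 DBMS_OUTPUT.put_line(v_greeting || aiv_text);
end;
/
execute pl2('srini');
drop procedure pl2;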

Here are the key takeaways

  • A named PL/SQL block differs in that it begins with CREATE OR REPLACE PROCEDURE and uses IS in place of DECLARE.
  • Variables are declared after the procedure name, between IS and BEGIN (see the pl2 sketch above).
  • You call the procedure with the EXECUTE command and remove it with DROP PROCEDURE, as shown below.
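
Note that EXECUTE is a SQL*Plus/SQL Developer shorthand. From inside any other PL/SQL block you call the procedure directly; a minimal sketch, run before the DROP PROCEDURE above:

BEGIN
 pl('called from an anonymous block');  -- same effect as: execute pl('...')
END;
/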

Read: How to write UDF to check input value is number or not
