ARI Stasis Events Query

Hello,

I want to capture all events pertaining to a call, from call connect to call hangup, for a Stasis app. These are the events I see on the Asterisk console when I enable full ARI debugging for any Stasis application.

Can I achieve this using a Python library? Python is my primary choice.

Thanks in advance.

Regards
CJ

Yes, you can achieve this using the Asterisk REST Interface (ARI) and a Python library for interacting with ARI. ARI provides a mechanism for controlling Asterisk via HTTP, enabling you to build applications that can handle various call events.

One popular Python library for interacting with ARI is ari-py. This library allows you to write Python applications that can subscribe to events and perform actions based on those events.

  1. Install ari-py: You can install the ari-py library using pip:
`pip install ari-py`
  2. Write your ARI application: Create a Python script that connects to the Asterisk ARI interface, subscribes to the events you’re interested in (such as StasisStart and StasisEnd), and handles those events. Here’s a basic example:

import ari

def on_start(channel, event):
    print("Call connected:", event)

def on_end(channel, event):
    print("Call ended:", event)

def main():
    client = ari.connect('http://<asterisk_ip>:8088/', '<ari_username>', '<ari_password>')
    client.on_channel_event('StasisStart', on_start)
    client.on_channel_event('StasisEnd', on_end)
    client.run(apps='<your_stasis_app>')  # the name of your Stasis application

if __name__ == '__main__':
    main()

Replace <ari_username>, <ari_password>, and <asterisk_ip> with the appropriate credentials and IP address of your Asterisk server.
  3. Run your ARI application: Execute your Python script, and it will start listening for events from Asterisk.

`python your_script.py`
  4. Handling events: Your application will now receive events whenever a call starts (StasisStart) or ends (StasisEnd). You can perform any desired actions based on these events, such as logging call details, processing call recordings, or triggering additional actions.
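Beyond the per-channel handlers above, ari-py also exposes a generic `client.on_event(event_type, callback)` registration, which is handy when you want the same handler for many event types (playbacks, recordings, and so on). The sketch below assumes that API; `summarize_event` is a made-up helper, and the angle-bracket placeholders are yours to fill in:

```python
def summarize_event(event):
    # Hypothetical helper: reduce a raw ARI event dict to a one-line summary.
    return "%s app=%s at %s" % (event.get('type', 'Unknown'),
                                event.get('application', '-'),
                                event.get('timestamp', '-'))

def main():
    import ari  # third-party: pip install ari-py

    client = ari.connect('http://<asterisk_ip>:8088/', '<ari_username>', '<ari_password>')

    # on_event() takes the raw ARI event type name and a callback that
    # receives the event dict, so one handler can cover many event types.
    for etype in ('StasisStart', 'StasisEnd',
                  'PlaybackStarted', 'PlaybackFinished'):
        client.on_event(etype, lambda ev: print(summarize_event(ev)))

    client.run(apps='<your_stasis_app>')

# main()  # uncomment to run against a live Asterisk server
```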

I hope this is helpful for you.

Best Regards
Danish Hafeez | QA Assistant
ICTInnovations

Thanks @danishhafeez,

Actually, I have that in a Stasis application, but in my Python script I am only getting the call connect and disconnect events; I am not getting events like playback, recording started, and recording ended.

Whereas on the Asterisk console I see all events. Are there any specific settings to get all events in the script?

Regards
CJ

To receive all events in your Python script from an Asterisk server, you need to ensure that you have configured your Asterisk Manager Interface (AMI) properly and that your Python script subscribes to the correct events.

In your Python script, use the appropriate method to subscribe to the events you want to receive. This is typically done by sending an Action request to the Asterisk server after establishing the AMI connection.

Here’s a basic example using the pyst library:

from asterisk.manager import Manager  # pyst/pyst2 installs the "asterisk" package

def event_listener(event, manager):
    print("Received event:", event.name)

manager = Manager()
manager.connect('hostname', 5038)  # Replace with your Asterisk server details
manager.login('username', 'password')  # Replace with your AMI credentials

# Subscribe to events
manager.register_event('*', event_listener)

manager.run()

In this example, '*' subscribes to all events. You can replace it with specific event names if you only want to receive certain types of events.

Ensure that your event listener function (event_listener in the example) is correctly configured to handle the events you’re interested in.

@Chetang.Jha You’re not being specific enough about what exactly you are doing in the Python script. If you see with `ari set debug all on` events going to your WebSocket, then they’re getting to your Python script and so it would be on the Python side. You’ve provided no information on what you’re using for Python to accomplish ARI functionality, so nothing else can really be said.

@danishhafeez This post is not helpful or relevant to this thread. This thread is about ARI, not AMI. I urge you to check what solution you’re using to answer people and see if the information is actually relevant and useful.


I also gave an answer about ARI; check carefully.

Your first post was about ARI. When the OP followed up, you responded about AMI, which has nothing to do with ARI events.

Hi @jcolp, @danishhafeez

It is about ARI. I think I got the answer in @danishhafeez’s first post, but the issue is that I am not getting all events when using the ARI API:

http://x.x.x.:8088/ari/events?app=stasisapp&subscribeAll=true&api_key=user:pass

So my query is: how do I capture all the events, from call connect to call hangup?

Regards
CJ

If you pass subscribeAll then you will be subscribed to all events from every channel from everything and they will be sent to your ARI application, including from channels that aren’t even in your ARI application.

As I stated if events are showing up in the debug as going to your websocket, they’re being sent and the problem is in your Python script.

And to elaborate further - if that is the case then without knowing what your Python script is, what library/approach it is using, etc, then the best anyone could do is guess.
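One library-agnostic way to confirm the events really do reach your application is to open the ARI events WebSocket directly and print every message Asterisk sends. The sketch below assumes the third-party websocket-client package; `events_url` is a made-up helper that mirrors the query parameters (`app`, `subscribeAll`, `api_key`) of the URL posted above, and the placeholders are yours to fill in:

```python
from urllib.parse import urlencode

def events_url(host, app, username, password, subscribe_all=True):
    # Made-up helper: build the /ari/events WebSocket URL with the
    # app, api_key, and (optionally) subscribeAll query parameters.
    params = {'app': app, 'api_key': '%s:%s' % (username, password)}
    if subscribe_all:
        params['subscribeAll'] = 'true'
    return 'ws://%s:8088/ari/events?%s' % (host, urlencode(params))

def dump_events(url):
    import websocket  # third-party: pip install websocket-client

    ws = websocket.create_connection(url)
    try:
        while True:
            print(ws.recv())  # one JSON-encoded ARI event per message
    finally:
        ws.close()

# dump_events(events_url('<asterisk_ip>', '<app>', '<user>', '<pass>'))
```

If events show up here but not in your script, the problem is in how the script registers its handlers, not in Asterisk.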

Thanks @jcolp

I am using the aioari Python library [GitHub - M-o-a-T/aioari: Library for accessing the Asterisk REST Interface].

App details: I have a Python Stasis application which records user input, does speech recognition, and plays a TTS prompt accordingly. There is an entire flow based on which the application interacts with the caller. Now I want to capture all the events for this Stasis app to display the in-call status in real time on a dashboard [using Python only, with the same aioari library].

Let me know if you want me to share the script as well.

Regards
CJ

I have no experience with that library personally, but the more information you provide the more likely people are to help and the better the help can be - that includes providing your script.

@jcolp here is the code snippet

#!/usr/bin/python3

import asyncio
import aioari
import logging
import speech_recognition as sr
import random
import time
import math
import json
import requests
from text2speech import *
from check_keys import *
from bot_logger_config import *
from db_logger import *
from datetime import datetime
from aiohttp.web_exceptions import HTTPError, HTTPNotFound

import os
ast_url = os.getenv("AST_URL", "http://localhost:8088/")
ast_user = os.getenv("AST_USR", "ariuser")
ast_pass = os.getenv("AST_PASS", "aripass")

# define an endpoint for originating call and patching it to caller
ast_outgoing = os.getenv("AST_OUTGOING", 'PJSIP/102')

# Define your ARI application
ast_app = os.getenv("AST_APP", "voice_bot_app")

#log to file function
log_me = bot_logger_config(ast_app)

#bot name hard coded for now
bot_name = 'bot_sample_1'
holding_bridge = None
now = datetime.now()

digits = [i for i in range(0, 10)]
random_str = ""
for i in range(6):
    index = math.floor(random.random() * 10)
    random_str += str(digits[index])

# Function to handle the ARI StasisStart event

async def stasis_start(objs, event):
    channel = objs['channel']
    print(f"StasisStart: {channel.json}")
    channel_name = channel.json
    chan_name = json.dumps(channel_name)
    chan_name1 = json.loads(chan_name)
    global chan_call_id
    global chan_fin_name
    global chan_fin_num

    chan_call_id = chan_name1['id']
    chan_fin_name = chan_name1['name']
    chan_fin_num = chan_name1['caller']['number']

    logger_message = 'Incoming Call from'+chan_fin_num
    logging.warning(logger_message)
    log_time = now.strftime(("%d/%m/%Y %H:%M:%S"))
    getdb_bot_logger_events(bot_name,chan_fin_num,logger_message,log_time,chan_call_id)

    logger_message = 'Channel Created'+chan_fin_name
    logging.warning(logger_message)
    log_time = now.strftime(("%d/%m/%Y %H:%M:%S"))
    getdb_bot_logger_events(bot_name,chan_fin_num,logger_message,log_time,chan_call_id)

    logger_message = 'Call ID'+chan_call_id
    logging.warning(logger_message)
    log_time = now.strftime(("%d/%m/%Y %H:%M:%S"))
    getdb_bot_logger_events(bot_name,chan_fin_num,logger_message,log_time,chan_call_id)

    # Answer the incoming call
    await channel.answer()

    logging.warning('Call Answered by bot')
    tts_text = 'Welcome to Acme financial Services, I am your virtual assistant, How may I help you today'
    get_t2sp = create_text_2_speech(tts_text)
    #print(get_t2sp)
    logger_message = 'Bot Playing Welcome Prompt' + get_t2sp
    logging.warning(logger_message)
    log_time = now.strftime(("%d/%m/%Y %H:%M:%S"))
    getdb_bot_logger_events(bot_name,chan_fin_num,logger_message,log_time,chan_call_id)

    menu_select_prompt = 'Please let me know from the options , Credit Card, Home Loan, Car Loan, Over Draft, Loan Transfer'
    get_menu_t2sp = create_text_2_speech_selection(menu_select_prompt)
    logger_message = 'Bot Playing Menu Prompt '+ get_menu_t2sp
    logging.warning(logger_message)
    log_time = now.strftime(("%d/%m/%Y %H:%M:%S"))
    getdb_bot_logger_events(bot_name,chan_fin_num,logger_message,log_time,chan_call_id)

    # Play a prompt asking the user to speak
    playback = await channel.play(media="sound:"+get_t2sp)

    #playback selection menu for caller
    playback = await channel.play(media="sound:"+get_menu_t2sp)

    time.sleep(14)

    logger_message = 'Awating callers response'
    logging.warning(logger_message)
    log_time = now.strftime(("%d/%m/%Y %H:%M:%S"))
    getdb_bot_logger_events(bot_name,chan_fin_num,logger_message,log_time,chan_call_id)

    # Record the user's voice
    audio_file = random_str
    global fin_audio
    fin_audio = '/var/spool/asterisk/recording/'+audio_file+'.wav'
    print(fin_audio)
    logger_message = 'Caller Recording stored at' + fin_audio
    logging.warning(logger_message)
    log_time = now.strftime(("%d/%m/%Y %H:%M:%S"))
    getdb_bot_logger_events(bot_name,chan_fin_num,logger_message,log_time,chan_call_id)

    await channel.record(
            name=audio_file,
            format='wav',
            ifExists='overwrite',
            maxDurationSeconds='10',
            maxSilenceSeconds='5',
            beep='false',
            terminateOn='*',
            )
    time.sleep(15)

    ## create a global bridge
    ## used to transfer call to live agent
    global holding_bridge
    bridges = [b for b in (await client.bridges.list())
            if b.json['bridge_type'] == 'holding']
    if bridges:
        holding_bridge = bridges[0]
        print ("Using bridge %s" % holding_bridge.id)
    else:
        holding_bridge = await client.bridges.create(type='holding')
        print ("Created bridge %s" % holding_bridge.id)

    async def safe_hangup(channel):
        try:
            await channel.hangup()
        except HTTPError as e:
            if e.response.status_code != HTTPNotFound.status_code:
                raise

    async def transfer_to_queue():
        print('Transferring to Live Agent')
        logging.warning('Transferring to Live Agent')
        eve_args = event['args']
        if event['args'] == ['dialed']:
            return
        logging.warning('received event %s'% eve_args)

        incoming = objs['channel']
        print('I am here %s'% incoming)
        await incoming.answer()
        p = await incoming.play(media="sound:pls-wait-connect-call")
        print(p)
        await asyncio.sleep(2)
        h = await holding_bridge.addChannel(channel=incoming.id)
        print(h)

        # Originate the outgoing channel
        outgoing = await client.channels.originate(endpoint=ast_outgoing, app=ast_app, appArgs="dialed")
        print("OUT:",outgoing)

        outgoing.on_event('ChannelDestroyed',lambda *args: safe_hangup(incoming))

        #print("Bridging",channel)
        print('Bridging', outgoing)
        #bridge = holding_bridge
        bridge = await client.bridges.create(type='mixing')
        await outgoing.answer()
        print(incoming.id)
        print(outgoing.id)
        incom_add = await bridge.addChannel(channel=incoming.id)
        print(incom_add)
        logging.warning('adding incoming channel ', incom_add)
        #await bridge.addChannel(channel=[incoming.id, outgoing.id])
        print('Bridged',incoming,outgoing)

    async def bot_2_human():
        menu_un = 'I am unable to understand you, please wait while I transfer your call to my Human partner'
        mn_un = create_text_2_speech_general(menu_un)
        playback = await channel.play(media='sound:'+mn_un)
        #t2q = await transfer_to_queue()
        t2q = await transfer_to_queue()

    async def go_to_menus(gtext):
        ##get_menus = create_text_2_speech_menu(gtext)
        menu_ok = 'Ok, I can help you with'+ gtext
        get_menus_ok = create_text_2_speech_menu(menu_ok)
        playback = await channel.play(media='sound:'+get_menus_ok)

        if(gtext == "home loan"):
            hl = await home_loans()
            hl_msg = create_text_2_speech_general(hl)
            playback = await channel.play(media='sound:'+hl_msg)
            logger_message = 'Bot Response for Selection ' + gtext +':'+ hl
            logging.warning(logger_message)
            log_time = now.strftime(("%d/%m/%Y %H:%M:%S"))
            getdb_bot_logger_events(bot_name,chan_fin_num,logger_message,log_time,chan_call_id)
            logger_message = 'Bot Call Hangup' + log_time
            logging.warning(logger_message)
            getdb_bot_logger_events(bot_name,chan_fin_num,logger_message,log_time,chan_call_id)
            time.sleep(10)
            await channel.hangup()
        elif(gtext == "credit card"):
            ccr = await credit_card()
            cc_msg = create_text_2_speech_general(ccr)
            playback = await channel.play(media='sound:'+cc_msg)
            logger_message = 'Bot Response for Selection ' + gtext +':'+ ccr
            logging.warning(logger_message)
            log_time = now.strftime(("%d/%m/%Y %H:%M:%S"))
            getdb_bot_logger_events(bot_name,chan_fin_num,logger_message,log_time,chan_call_id)
            logger_message = 'Bot Call Hangup' + log_time
            logging.warning(logger_message)
            getdb_bot_logger_events(bot_name,chan_fin_num,logger_message,log_time,chan_call_id)
            time.sleep(10)
            await channel.hangup()
        elif(gtext == "car loan"):
            crln = await car_loan()
            crln_msg = create_text_2_speech_general(crln)
            playback = await channel.play(media='sound:'+crln_msg)
            logger_message = 'Bot Response for Selection ' + gtext +':'+ crln
            logging.warning(logger_message)
            log_time = now.strftime(("%d/%m/%Y %H:%M:%S"))
            getdb_bot_logger_events(bot_name,chan_fin_num,logger_message,log_time,chan_call_id)
            logger_message = 'Bot Call Hangup' + log_time
            logging.warning(logger_message)
            getdb_bot_logger_events(bot_name,chan_fin_num,logger_message,log_time,chan_call_id)
            time.sleep(10)
            await channel.hangup()
        elif(gtext == "over draft"):
            ovdr = await over_draft()
            od_msg = create_text_2_speech_general(ovdr)
            playback = await channel.play(media='sound:'+od_msg)
            logger_message = 'Bot Response for Selection ' + gtext +':'+ ovdr
            logging.warning(logger_message)
            log_time = now.strftime(("%d/%m/%Y %H:%M:%S"))
            getdb_bot_logger_events(bot_name,chan_fin_num,logger_message,log_time,chan_call_id)
            logger_message = 'Bot Call Hangup' + log_time
            logging.warning(logger_message)
            getdb_bot_logger_events(bot_name,chan_fin_num,logger_message,log_time,chan_call_id)
            time.sleep(10)
            await channel.hangup()
        elif(gtext == "loan transfer"):
            lntr = await loan_transfer()
            lntr_msg = create_text_2_speech_general(lntr)
            playback = await channel.play(media='sound:'+lntr_msg)
            logger_message = 'Bot Response for Selection ' + gtext +':'+ lntr
            logging.warning(logger_message)
            log_time = now.strftime(("%d/%m/%Y %H:%M:%S"))
            getdb_bot_logger_events(bot_name,chan_fin_num,logger_message,log_time,chan_call_id)
            logger_message = 'Bot Call Hangup' + log_time
            logging.warning(logger_message)
            getdb_bot_logger_events(bot_name,chan_fin_num,logger_message,log_time,chan_call_id)
            time.sleep(10)
            await channel.hangup()
        else:
            bt2h = await bot_2_human()

    async def recognize_speech(final_audio):
        recognizer = sr.Recognizer()
        with sr.AudioFile(final_audio) as source:
            audio_data = recognizer.record(source)
            try:
                text = recognizer.recognize_google(audio_data)
                logger_message = 'Caller Spoke ' + text
                logging.warning(logger_message)
                log_time = now.strftime(("%d/%m/%Y %H:%M:%S"))
                getdb_bot_logger_events(bot_name,chan_fin_num,logger_message,log_time,chan_call_id)
                gt_key = await bot_keys(text)

                if gt_key != text:
                    logger_message = 'No Response defined, Bot will transfer call to Human'
                    logging.warning(logger_message)
                    log_time = now.strftime(("%d/%m/%Y %H:%M:%S"))
                    getdb_bot_logger_events(bot_name,chan_fin_num,logger_message,log_time,chan_call_id)
                    hum_tr = await bot_2_human()
                elif gt_key == text:
                    logger_message = 'Valid Response , Bot will proceed for Menus'
                    logging.warning(logger_message)
                    log_time = now.strftime(("%d/%m/%Y %H:%M:%S"))
                    getdb_bot_logger_events(bot_name,chan_fin_num,logger_message,log_time,chan_call_id)
                    await go_to_menus(text)
            except sr.UnknownValueError:
                print("Speech Recognition could not understand audio")
                logger_message = 'Speech Recognition could not understand audio'
                logging.warning(logger_message)
                log_time = now.strftime(("%d/%m/%Y %H:%M:%S"))
                getdb_bot_logger_events(bot_name,chan_fin_num,logger_message,log_time,chan_call_id)
            except sr.RequestError as e:
                print("Unknown exception")
                logger_message = "Unknown exception"
                logging.warning(logger_message)
                log_time = now.strftime(("%d/%m/%Y %H:%M:%S"))
                getdb_bot_logger_events(bot_name,chan_fin_num,logger_message,log_time,chan_call_id)

    recspeech = await recognize_speech(fin_audio)

# Function to handle the ARI StasisEnd event
async def stasis_end(channel, event):
    #print("END", channel, event)
    print(f"StasisEnd: {channel.json}")
    await holding_bridge.destroy()

sessions = {}

loop = asyncio.get_event_loop()
client = loop.run_until_complete(aioari.connect(ast_url,ast_user,ast_pass))

client.on_channel_event("StasisStart", stasis_start)
client.on_channel_event("StasisEnd", stasis_end)

loop.run_until_complete(client.run(apps=ast_app))

@jcolp,

This is the events I am talking about ,

<--- Sending ARI event to 127.0.0.1:50874 --->
{"type":"StasisStart","timestamp":"2024-03-20T12:19:13.754+0530","args":[],"channel":{"id":"1710917345.15","name":"PJSIP/tcl_sip-00000005","state":"Up","protocol_id":"26483463-a6f7-4b4f-b7ff-dd74c02e7632","caller":{"name":"09702243283","number":"09702243283"},"connected":{"name":"09702243283","number":"09702243283"},"accountcode":"","dialplan":{"context":"reminder-in","exten":"4825274","priority":11,"app_name":"Stasis","app_data":"reminder_bot"},"creationtime":"2024-03-20T12:19:05.060+0530","language":"en"},"asterisk_id":"0a:65:1d:c5:3d:2a","application":"reminder_bot"}
<--- Sending ARI event to 127.0.0.1:50874 --->
{"type":"PlaybackStarted","timestamp":"2024-03-20T12:19:14.088+0530","playback":{"id":"6bc95af6-906c-4310-b776-9ce77f5f24f6","media_uri":"sound:general","target_uri":"channel:1710917345.15","language":"en","state":"playing"},"asterisk_id":"0a:65:1d:c5:3d:2a","application":"reminder_bot"}
    -- <PJSIP/tcl_sip-00000005> Playing 'general.slin' (language 'en')
<--- Sending ARI event to 127.0.0.1:50874 --->
{"type":"PlaybackFinished","timestamp":"2024-03-20T12:19:17.849+0530","playback":{"id":"6bc95af6-906c-4310-b776-9ce77f5f24f6","media_uri":"sound:general","target_uri":"channel:1710917345.15","language":"en","state":"done"},"asterisk_id":"0a:65:1d:c5:3d:2a","application":"reminder_bot"}
<--- Sending ARI event to 127.0.0.1:50874 --->
{"type":"PlaybackStarted","timestamp":"2024-03-20T12:19:17.849+0530","playback":{"id":"5cbad746-3309-4f78-a1fc-01587ae33acb","media_uri":"sound:tcl_select_menus","target_uri":"channel:1710917345.15","language":"en","state":"playing"},"asterisk_id":"0a:65:1d:c5:3d:2a","application":"reminder_bot"}
    -- <PJSIP/tcl_sip-00000005> Playing 'tcl_select_menus.slin' (language 'en')
<--- Sending ARI event to 127.0.0.1:50874 --->
{"type":"PlaybackFinished","timestamp":"2024-03-20T12:19:19.610+0530","playback":{"id":"5cbad746-3309-4f78-a1fc-01587ae33acb","media_uri":"sound:tcl_select_menus","target_uri":"channel:1710917345.15","language":"en","state":"done"},"asterisk_id":"0a:65:1d:c5:3d:2a","application":"reminder_bot"}
<--- Sending ARI event to 127.0.0.1:50874 --->
{"type":"PlaybackStarted","timestamp":"2024-03-20T12:19:19.610+0530","playback":{"id":"4d50f5b1-31d0-43b0-8b0c-bd60e6bb4223","media_uri":"sound:general_123","target_uri":"channel:1710917345.15","language":"en","state":"playing"},"asterisk_id":"0a:65:1d:c5:3d:2a","application":"reminder_bot"}
    -- <PJSIP/tcl_sip-00000005> Playing 'general_123.slin' (language 'en')
<--- Sending ARI event to 127.0.0.1:50874 --->
{"type":"PlaybackFinished","timestamp":"2024-03-20T12:19:23.231+0530","playback":{"id":"4d50f5b1-31d0-43b0-8b0c-bd60e6bb4223","media_uri":"sound:general_123","target_uri":"channel:1710917345.15","language":"en","state":"done"},"asterisk_id":"0a:65:1d:c5:3d:2a","application":"reminder_bot"}
<--- Sending ARI event to 127.0.0.1:50874 --->
{"type":"PlaybackStarted","timestamp":"2024-03-20T12:19:23.231+0530","playback":{"id":"ac491f20-56e2-4cea-9e84-102798f7f39e","media_uri":"sound:general_digits","target_uri":"channel:1710917345.15","language":"en","state":"playing"},"asterisk_id":"0a:65:1d:c5:3d:2a","application":"reminder_bot"}
    -- <PJSIP/tcl_sip-00000005> Playing 'general_digits.slin' (language 'en')
<--- Sending ARI event to 127.0.0.1:50874 --->
{"type":"PlaybackFinished","timestamp":"2024-03-20T12:19:26.371+0530","playback":{"id":"ac491f20-56e2-4cea-9e84-102798f7f39e","media_uri":"sound:general_digits","target_uri":"channel:1710917345.15","language":"en","state":"done"},"asterisk_id":"0a:65:1d:c5:3d:2a","application":"reminder_bot"}
<--- Sending ARI event to 127.0.0.1:50874 --->
{"type":"PlaybackStarted","timestamp":"2024-03-20T12:19:26.371+0530","playback":{"id":"1c61aeb2-02d9-4657-9d42-597e118120e6","media_uri":"sound:tcl_welcome","target_uri":"channel:1710917345.15","language":"en","state":"playing"},"asterisk_id":"0a:65:1d:c5:3d:2a","application":"reminder_bot"}
    -- <PJSIP/tcl_sip-00000005> Playing 'tcl_welcome.slin' (language 'en')
<--- Sending ARI event to 127.0.0.1:50874 --->
{"type":"PlaybackFinished","timestamp":"2024-03-20T12:19:29.112+0530","playback":{"id":"1c61aeb2-02d9-4657-9d42-597e118120e6","media_uri":"sound:tcl_welcome","target_uri":"channel:1710917345.15","language":"en","state":"done"},"asterisk_id":"0a:65:1d:c5:3d:2a","application":"reminder_bot"}
<--- Sending ARI event to 127.0.0.1:50874 --->
{"type":"PlaybackStarted","timestamp":"2024-03-20T12:19:29.112+0530","playback":{"id":"61f5a903-ce2f-4e8e-b433-6fec994c8979","media_uri":"sound:currency_digits","target_uri":"channel:1710917345.15","language":"en","state":"playing"},"asterisk_id":"0a:65:1d:c5:3d:2a","application":"reminder_bot"}
    -- <PJSIP/tcl_sip-00000005> Playing 'currency_digits.slin' (language 'en')
<--- Sending ARI event to 127.0.0.1:50874 --->

This is what is displayed when I enable `ari set debug on`.

Regards
CJ

So events are getting sent. Your question is really about how to use the library at this point, which, as I stated, I have no experience with. Since no one else has answered, you’re probably going to need to go look at the library, learn how it works, and experiment.
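For example, if aioari mirrors ari-py’s generic event API (it is an asyncio port of that library), a catch-all subscription for the playback and recording events might look like the sketch below. This has not been verified against aioari itself, and `status_line` is an invented formatter:

```python
import asyncio

def status_line(event):
    # Invented formatter: map a raw ARI event dict to a dashboard line.
    etype = event.get('type', '?')
    if etype.startswith('Playback'):
        return '%s on %s' % (etype, event.get('playback', {}).get('target_uri', '?'))
    if etype.startswith('Recording'):
        return '%s on %s' % (etype, event.get('recording', {}).get('target_uri', '?'))
    return etype

async def watch(ast_url, user, password, app):
    import aioari  # third-party: the library used in the script above

    client = await aioari.connect(ast_url, user, password)
    for etype in ('PlaybackStarted', 'PlaybackFinished',
                  'RecordingStarted', 'RecordingFinished'):
        # Assumes aioari exposes ari-py's generic on_event(type, callback),
        # where the callback receives the raw event dict.
        client.on_event(etype, lambda ev: print(status_line(ev)))
    await client.run(apps=app)

# asyncio.get_event_loop().run_until_complete(
#     watch('http://localhost:8088/', 'ariuser', 'aripass', 'voice_bot_app'))
```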