Common Callable Data Validation Modules
A lean organization validates data at the first point of input. This has been difficult to achieve, however, until the advent of service-oriented architecture leveraging web services and other integration tools, which now makes possible a central repository of common field validation routines that can be called by internal Boeing Accounting Systems processes as well as by source systems.
This project will provide a solution that may be utilized by the Boeing Enterprise Accounting System to offer a more robust data validation module, resulting in improved data quality and reduced resources required to analyze and correct the data.
The Boeing Accounting System receives accounting transaction data from many different systems on multiple platforms, including Payroll, Timekeeping, Labor, Material, Accounts Payable, and others. In the current design, source systems send data to the Accounting System, and all data validation is performed in the target environment. Any data that fails validation is defaulted, creating a subsequent need to clean up the suspended data. Suspended data delays assignment to the proper Chargeline and requires a time-consuming cleanup effort.
The goal of the proposed project is for data to be validated through an openly callable mechanism that allows the source systems (on many different platforms) to invoke the validation modules from within their online data entry systems. Not all source systems carry all of the fields that are ultimately sent to the target system, so the architecture must recognize which fields should be validated based on which fields are available in the call.
The end result should be a data validation module that is accessible from online data entry systems which can recognize which validations are to be performed based on the fields that are passed to it. The online system would pass the field names and field values to the validation module. The validation module would perform applicable validation procedures and then return a pass/fail indication to the source system.
An example of this situation would be when a person logs into the Purchasing/Payable system to buy some stationery. The Purchasing/Payable system is separate from the Accounting system and feeds a batch journal to the Accounting system weekly. The person ordering the stationery is required to enter an account and a department that will be charged for the purchase. When that account and department are sent to the Accounting system at the end of the week, along with a variety of other fields that are assigned later, both are checked to ensure they are valid (that they exist on the associated validation table), and a third check confirms that the department is allowed to be charged with that account. This project would create a way for the Purchasing/Payable system to call those same three edits (account; department; department-to-account) and perform the validation with the available fields at the time of entry.
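As a sketch of the dispatch described above, a validation module might run only the edits that apply to the fields supplied and return a pass/fail indication. All table contents, field names, and rule wording below are invented for illustration only:

```python
# Sketch of a common callable validation module; the data and rule
# names here are hypothetical stand-ins for the real validation tables.

VALID_ACCOUNTS = {"6001", "6002"}           # stand-in for the account table
VALID_DEPARTMENTS = {"D10", "D20"}          # stand-in for the department table
ACCOUNT_DEPT_PAIRS = {("6001", "D10"), ("6002", "D20")}  # allowed combinations

def validate(fields):
    """Run only the edits applicable to the fields supplied.

    `fields` maps field names to values; returns (passed, failures).
    """
    failures = []
    account = fields.get("account")
    department = fields.get("department")

    if account is not None and account not in VALID_ACCOUNTS:
        failures.append("account: not on validation table")
    if department is not None and department not in VALID_DEPARTMENTS:
        failures.append("department: not on validation table")
    # The cross-field edit runs only when both fields are present and valid.
    if account in VALID_ACCOUNTS and department in VALID_DEPARTMENTS:
        if (account, department) not in ACCOUNT_DEPT_PAIRS:
            failures.append("department may not be charged with this account")

    return (not failures, failures)
```

Because only the supplied fields are checked, a source system that carries just a department can still call the same module without triggering account edits.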
Note that while the Accounting System is built on the PeopleSoft application (a COTS package from Oracle), the intent of this project is to build a generic edit module. The back-end validation and the calling of those modules will be written in a language outside of PeopleSoft.
- Determination of Technology
- Evaluate and select the appropriate technology to build the prototype
- This would be determined in the initial meetings between the students and Boeing and will be dependent on the existing knowledge and skill sets of the students
- Validation Module Prototype
- Design and build a prototype of the Common Callable Data Validation module that would be available to be called from online data entry systems.
- The end result should be a module that is accessible from online data entry systems and can recognize which validations to perform based on the fields that are passed to it. The online system would pass the field names and field values to the validation module; the validation module would perform the applicable checks and then return a pass/fail indication to the source system.
- For the purposes of the prototype, the online data entry system can just be a shell system (or program) to call the Validation module.
- Integration Services
- Design and build a Publish/Subscribe integration
- Ability to integrate with multiple platforms including PeopleSoft Financials application on an Oracle database
Additional Information To Be Provided
These will be provided on request as part of the initial meetings with the students.
- Table and field descriptions and layouts
- List of sample data validation procedures to be included in the prototype
As part of the project, the expected deliverables would be:
- Design document
- Data Validation Module design
- Integration Services design
- Prototype code
- Unit test plan and results
Signed NDA & IP Agreements Required From Each Member of Student Team
- Deutsche Bank Global Technology, Inc.
Visualization of Customer Information
Customer relationship management (CRM) is one of the most effective approaches to creating and maintaining relationships with customers. Organizations of all kinds use CRM systems to track customer information and collect data points, allowing them to understand their customers and make informed decisions. Widely used CRM tools include Oracle, Salesforce.com, and SAP, to name a few.
Deutsche Bank & CRM
Deutsche Bank has many sales and customer opportunities globally. We currently use a CRM system to keep track of most of our data, but it is missing many key capabilities that would help DB stand out from the competition. Chief among them is a view of a customer that contains contact, schedule, financial, deal, trade, and internal information. It would be available to the entire DB sales force on multiple platforms to help facilitate use and increase customer contact. The ability to correlate, update, and analyze these different data points quickly and clearly could lead to increased deals and better relationships with our customers.
This Semester’s Assignment
We are looking for a group of dynamic students who can help design and develop a user-centric CRM system that allows users in the bank to have all their customer information just a few clicks away. Deutsche Bank Sales and Traders need to make quick decisions that can impact their customers and the market, so easily accessible and consumable information is key. Deutsche Bank will provide required data fields for the CRM system and knowledge of bank practices, but we need your ideas and technical expertise to create a dynamic system.
NCSU Student Experience
Senior Design students in the College of Engineering, Department of Computer Science and students at the College of Design, CODE Studio, will have a unique opportunity to partner together over the course of the semester to develop a user-centric CRM system, as they would experience in real-world product development. Additionally, students will have access to top industry professionals to assist with design principles, Agile/Scrum practices, and overall development, coding, and testing.
In 2011, NC State University was chosen to participate in a prestigious, three-year collegiate engineering competition called “EcoCAR2: Plugging In to the Future.” The founders of this automotive competition – U.S. Department of Energy (DOE) and General Motors (GM) – have challenged NC State and fourteen other universities in the U.S. and Canada to reduce the environmental impact and improve the fuel economy of a 2013 production-line Chevrolet Malibu without sacrificing the vehicle’s performance, consumer acceptability, or safety. More specifically, the technical goals of the competition are to construct and demonstrate vehicles and powertrains that, in comparison to production gasoline vehicles:
- Reduce fuel consumption
- Reduce well-to-wheel greenhouse gas emissions
- Reduce criteria tailpipe emissions
- Maintain consumer acceptability in the areas of performance, utility, and safety
For more information about EcoCAR2: Plugging In to the Future, please visit http://www.ecocar2.org/
Currently, in Year 2 of the competition, NC State is implementing the series biodiesel-electric hybrid vehicle architecture it designed last year.
A large part of the EcoCAR2 Year 2 Final Competition, which occurs in May and concludes Year 2 of the competition, is the Static Consumer Acceptability (SCA) event. This event assesses the practicality and appropriateness of the vehicle’s systems, such as instrument panels, infotainment, and other interior and exterior features, while it is not in motion.
An important goal of the NCSU EcoCAR2 project is to improve the dashboard layout and functionality of the vehicle (see Figure 1, below). The development of the center stack and dashboard will be crucial for NC State in the Spring 2013 SCA event.
Thus, the main objective of this CSC 492 project is to set up and develop a fully functional infotainment system in the center stack of a standard 2013 Chevrolet Malibu. This software system should be accessible through a touch screen and should replace all functionality that exists in the current vehicle setup. The system must make the vehicle easier to use without compromising the safety of the driver.
Figure 1: Current Center Stack and Dashboard of 2013 Chevrolet Malibu
Freescale Semiconductor has provided the NC State EcoCAR2 team with an i.MX platform featuring the required electronics, software, and display needed to develop their programmable center stack unit.
The hardware that has been provided for this effort so far includes the i.MX53 QSB board, which can be used to learn the basics of the i.MX multimedia processors and Freescale Embedded Linux applications, and the i.MX53 ARD board, which can be utilized to harness the Linux application building process, CAN interfacing, and integration of the Freescale hardware with QNX software. Furthermore, the team has been provided with a BlackBerry Playbook, as well as RIM and QNX software, for software training and application development for the center stack. The board that will be ultimately utilized in the vehicle is the i.MX6 SABRE-AI board which will be provided to the team in December of 2012. Also, a new LCD touch screen specifically fit for the intended vehicle operation will be delivered to the teams at the beginning of 2013.
The software platforms available for development include Embedded Linux, Android, QNX, and Windows Embedded Compact. This software and the corresponding development tools, emulation environments, and/or IDEs are widely available online or on-hand from the EcoCAR2 competition sponsors.
This project is no longer available for bidding, as the team for this project has already been formed.
- Entrepreneur Project
Indie and local music artists are currently at a disadvantage in advertising compared to mainstream artists who are backed by large corporate record labels. With funding from these corporations, artists are able to easily target audiences and advertise across all platforms, ranging from television and radio to YouTube and Google. Few methods other than word of mouth are available for smaller artists to advertise themselves and their shows to fans and new audiences. There are no well-designed methods for users to automatically stay up to date on any local or national artist’s new songs, shows, and changes. Current solutions tend to focus on servicing either fans or artists exclusively, rather than mediating the relationship between the two.
Band Fan will solve these problems for artists by providing an easy to use platform which is freely available for artists of all sizes. It will use an interface to connect to supported social networking sites, allowing for uniform advertisement across all social networks. This will also allow for fans to automatically sync the artists they are already following on their networks with our program.
The project will be broken down into 5 phases, the first of which will be completed in the Spring Semester of 2013 (Spring 13). The 5 phases in order: Incubation Period, Summer Sessions, Profiteering, Toe Stepping, and Metadata.
Incubation period refers to the time we will spend creating the basic features necessary to run and test the program during Spring 13. During this time, we will complete the following primary technical objectives, some of which can be found in the Technical Features section:
- All High Priority Features (found in the Technical Features section)
- As many Medium Priority Features possible (found in the Technical Features section)
- Feature Reviews and Planning
- Cloud Server Selection, Interface Implementation, and User Logins
- Version Control Selection
- Database Setup
- Website Design
- Automated Testing
Secondary entrepreneurial objectives for the first phase include:
- Preliminary Band Support and Requirements Elicitation
- Preliminary Fan Support and Requirements Elicitation
- Initial Product Branding and Artistic Designs
- Software Development Process Established
The completion of this initial phase in its entirety will result in a runnable and testable skeleton for the product that can have additional features added to it. 100% completion of all primary and secondary objectives is ambitious, so there is some leeway allotted for incomplete implementation of Medium Priority features and secondary objectives. There will be no costs incurred during Incubation Period except for a potential server fee, if needed, for Interface Implementation and User Logins.
The end of Spring 13 will usher in the new phase, Summer Sessions. At this point, all active team members will be given the choice whether or not to stay on the team. We will seek out more team members to bring on board as well. The purpose of Summer Sessions will be to get all initial features running for beta testing by all users. For the sake of space in this document, further details of phases 2 through 5 will not be provided.
- NCSU Health Professions Advising
Health PAC Student Portfolio Redesign/Reprogramming
The Health Professions Advising Center (Health PAC) is dedicated to mentoring students and helping them reach their health care career goals. Health PAC is available to all current undergraduate and graduate students, alumni, as well as students returning to obtain admission credits through postgraduate studies. Health PAC includes a complete advising center, an extensive mentoring website and personalized health career planning with Anita P. Flick, MD.
One of the most critical components of this advising program is the web-based Student Portfolio System, which has been developed over the past 4 years by CSC and CALS work study students and staff. The current system allows pre-professional health students to track their academic, clinical, service, and social accomplishments during their career at NCSU, compile letters of recommendation, and prepare their files for their application to graduate school. They are also able to share their portfolio information with graduate programs, advisers, and employers. At the time of their application, the system is equipped to allow our university Health Professions Review Committee to evaluate and compile what is called a University Committee Letter, which is submitted to their graduate programs and has been shown to greatly enhance acceptance rates. Since implementing this system at NCSU, along with the other components of our 5 Points of Success Program, we have seen our acceptance rates more than double!
The current system allows students to create a portfolio which captures not only basic biographical data but also career interests, majors, GPA, etc. In addition, students can track and upload key information for their future applications including transcripts, essays, testing data and, most importantly, their accomplishments and applicant strengths; all of which may eventually be used as part of their graduate school applications. The components above are also currently used by the advising center, academic advisers, and employers for a very user-friendly snapshot of a student’s current strengths and preparedness. The system also allows students to electronically request letters of recommendation, which can be incorporated into the committee letter mentioned previously.
The reason for a re-design is that, although the Student Portfolio System is in a working state and very beneficial for students, a recent review of the system highlighted key concerns which negatively impact prospects for future addition of important features. These concerns also include a need for better documentation and maintenance standards, creating clearly defined test plans for portfolio functions and improving search and validation features. Below are key points of concern.
- The current source code is written in ColdFusion. For the longevity of the portfolio system, we would like to consider utilization of an open source programming language. Due to interest in customizable portfolio features both within and outside the university, decisions relating to programming language and features should include consideration of those most readily available and utilized by consumers.
- The redesign should accommodate a variety of academic programs. Consideration should be given to marketing this product to other universities under a fee for service model or as a software product for sale and distribution. The new Health PAC system should be available for use by NCSU Pre-Healthcare Professional students and able to be tailored for other pre-professional areas such as law, vet medicine, plus masters and PhD applicants. We have had requests from at least 8 major colleges and universities to access our portfolio system and programming for their own students. In the fall of 2012, a CSC senior design team worked on the reprogramming of the student side of the program using PHP on Zend Framework 2.0 with good success.
The goal of this senior design project team is to review the current student interface of the portfolio system and utilize this platform in the redesign, incorporating current features and desired enhancements. It should be noted that the team is NOT expected to duplicate the entire existing system but to focus on the administrative components of the system with an emphasis on quality. It is hoped that the redeveloped administrative components created by a Spring senior design team can be combined with the work of the fall 2012 team to provide Health PAC with a solid foundation on which the subsequent redesign of our portfolio system can be completed by fall 2013.
Applications for Lexmark MultiFunction Peripherals
Lexmark MultiFunction Peripherals (MFP) provide users with the expected functions of printing, scanning, copying and faxing. In addition, these MFPs have an embedded Java Virtual Machine (JVM) that allows users to download and execute applications that extend the MFP’s capabilities far beyond the basic functions. Lexmark provides a Software Development Kit (SDK) that gives application developers access to virtually all of the MFP capabilities. This includes network and internet access, USB, touch screen, buttons, RAM, and flash storage, as well as pre-existing workflows such as scan and email.
In this project, students will be provided with the SDK and will develop and run one or more applications for Lexmark MFPs. Application suggestions will be provided, but students may choose a topic on their own. The scope of an application will determine whether developing one is adequate or whether more than one should be built.
The following are some examples of applications developed in the past.
Testing and grading application. A teacher uses this app to create customized bubble answer sheets to administer a test. Each answer sheet contains the student’s name and a machine readable bar code. To grade the test, an answer key is placed on top of the stack of answer sheets and scanned in. Optical Mark Recognition (OMR) identifies the student’s answers and bar code recognition software identifies the student. The grades are entered into a database and a summary report of overall class performance is printed. A commercial, server based version of this app is currently in use by the New York City Department of Education, the largest in the country.
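The grading step of an app like this could be sketched as follows, assuming the OMR and bar-code recognition have already produced each student’s answer list (the function name and all data here are hypothetical):

```python
# Toy grading step for a bubble-sheet app: the key and per-student
# answers are assumed to come from OMR/bar-code recognition upstream.

def grade(key, answers_by_student):
    """Return each student's score plus the class average for the
    summary report."""
    scores = {}
    for student, answers in answers_by_student.items():
        # One point per answer matching the key, position by position.
        scores[student] = sum(1 for k, a in zip(key, answers) if k == a)
    average = sum(scores.values()) / len(scores)
    return scores, average
```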
Email Phrasebook application. When scanning a document to send as an email attachment, it is often desirable to include explanatory information in the body of the email. This application provides the user with a database of commonly used phrases that can be selected for inclusion in the subject line or message body. The database also includes foreign language translations of the phrases so that email sent internationally can contain the phrases in the recipient’s native language. Additionally, the user prompts can be images of text that were generated on a workstation in a language other than English. This allows the application to be used by non English speaking users throughout the world.
Color correction for color vision deficiency: Approximately 8% of the population has some level of Color Vision Deficiency (CVD). Commonly called color blindness, CVD makes some colors hard to discriminate even though they are obviously different to individuals with normal color vision. This can cause great confusion in reading documents where information is color coded such as in pie charts. In this app, a document such as a pie chart is scanned and a search is made for pairs of color regions that could be confused by those with CVD. One of those colors is changed to make them discriminable and the document is printed. Care must be taken to ensure that the change does not create a different confusing pair and also that it is small enough to preserve the intended experience for viewers with normal vision.
Concierge application. Upscale hotels provide their guests with the services of a Concierge. This is a person who can make recommendations about local restaurants, provide directions to local attractions, theater times, guided tour suggestions and many other guest services. Budget conscious hotels can provide many of these same services by having a Lexmark MFP in the lobby running a Concierge app. The app prompts the user for the type of service they need. This can include restaurant choices, walking tour maps, weather reports and a great many other services where the information and transactions are available over the internet. The app obtains the needed information, conducts the transaction if there is one, and prints the result.
Because the SDK is Lexmark confidential, students must agree not to make any unnecessary disclosure as to its contents.
Signed NDA & IP Agreements Required From Each Member of Student Team
- Undergraduate Research 1
X86 IDE Development Project
This project consists of designing and developing an integrated program development environment for the CSC236 Computer Architecture and Assembler Language Class.
The components of the IDE are an assembler that translates MASM x86 assembler source code into machine object code, a linker that combines object files into an executable program, and a simulator that runs the executable program. The simulator should also provide for reading and writing basic ASCII data from a file, and provide a mechanism for the programmer to interact with the simulator to aid in isolating logic defects in the program.
CSC236 focuses on program efficiency, measured in the number of lines of code written and the number of lines of code executed. Thus the IDE must be able to report those numbers for programs.
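The “lines written” half of that metric might be approximated as below (the function name is invented; the `;` comment convention matches MASM). The “lines executed” half would be a counter incremented by the simulator for each instruction it runs:

```python
def lines_written(source):
    """Count non-blank, non-comment source lines; a rough stand-in for
    the course's 'lines of code written' metric. MASM comments start
    with ';'."""
    count = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith(";"):
            count += 1
    return count
```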
For the subset of MASM x86 functions implemented, the IDE must produce results that match running on an x86 platform.
The system must run on multiple platforms (Windows 32- and 64-bit, Mac, Linux). How that is done is an open item; options include compiling to specific platforms or building a web-based system.
The language used to write the IDE is open.
The source assembler language the IDE must process is MASM 8086. The input source code will be in MASM format. Only a subset of the full MASM 8086 language will need to be built. It will include the most commonly used instructions and addressing modes; these will be provided by the sponsor. The design of the IDE should be modular to allow expansion of the function at a later time.
The standard format for source input is an ASCII text program.asm file. The standard format for machine code is a program.obj file. The standard format for an executable is a program.exe file. The IDE must be able to read input source ASCII files. However, since this IDE is self contained it does not need to use the .obj or .exe file formats; the development team may use their own internal formats.
The linker function is open. The real need is the following. For some programs the student writes the whole program (they write 100% of the source code). However, for some programs, the instructor provides a main driver program and the student writes a subroutine. The instructor does not wish to provide the student with the source code for the driver. Currently he provides a .obj file that the student links with their .obj file. The instructor is open to other implementations for combining an instructor module with a student’s module.
The simulator must simulate reading and writing basic character ASCII data from a file; whether that is expanded to reading and writing the keyboard and display is open.
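A fetch-execute loop of the kind the simulator needs can be illustrated with a tiny two-instruction subset. This is a sketch only; the actual instruction set, addressing modes, and internal representation are for the team to design with the sponsor:

```python
# Minimal fetch-execute loop over a toy MOV/ADD subset with immediate
# or register operands; illustrative only, not the required subset.

def run(program):
    """Execute a list of parsed (opcode, dest, src) tuples; return the
    register state and the executed-instruction count used for the
    course's efficiency metric."""
    regs = {"AX": 0, "BX": 0}
    executed = 0
    for op, dest, src in program:
        # Operand is a register name if known, otherwise an immediate.
        value = regs[src] if src in regs else int(src)
        if op == "MOV":
            regs[dest] = value
        elif op == "ADD":
            regs[dest] += value
        else:
            raise ValueError(f"unsupported opcode: {op}")
        executed += 1
    return regs, executed
```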
Critical issues to the sponsor
It is 100% guaranteed that the sponsor will need to maintain the system when the development team graduates. Thus the critical requirements are:
- The code must be clearly and fully documented so it can be maintained.
- The design of the system should be straightforward and not convoluted. Simplicity trumps all other factors.
- There should be an automated batch build process that builds an IDE from the source code.
- There should be a test suite of programs that can be used to verify the operation of a new IDE build.
- Undergraduate Research 2
Algorithm Animation Tools in the Context of a Universal Graph Creation, Editing, and Drawing System
A large variety of software is available for viewing animations of graph algorithms. Most of it is either based on (a) animating a single example, or (b) allowing a user to create/modify a graph and then run one algorithm from a limited collection on it (there is often only one algorithm in the collection). There are two primary long-term goals for the project:
- Develop an animation system/tool that not only allows a user to edit graphs, but also to create animations of different algorithms using a simple high-level programming language with embedded calls to the animator.
- Make the components of the system, particularly the editor and the animator, independent and connected to other parts via remote procedure calls or formatted messages so that any component can be written in any programming language and provide any user interface (e.g., an accessible one for blind and low vision users).
The first goal has already been implemented by GDR, a crude algorithm animation tool that was state of the art in the late 1980s and is still used today in CSC 505 to illustrate graph algorithms. It suffers, however, from several disadvantages.
- All components are written in C and require an X11 window interface.
- The components – the graph editor and the creation of animations – are all embodied in a single program.
- The user interface, while functional, is very crude and would be difficult to improve in the context of the existing design.
- Graphs are shown in black and white. Thus animations requiring more than two colors to distinguish vertices and edges having different properties cannot be implemented. Many GDR animations work around this restriction by manipulating the labels on vertices and edges.
GDR is a useful starting point, not from an implementation point of view, but as an illustration of the capabilities under the first goal. A reasonable semester project might be to implement a graph editor and an animation of a single algorithm such as depth-first search. The deliverable should satisfy the objectives inherent in the first goal while addressing the disadvantages of GDR.
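The shape of an algorithm written against such an animator could look like the sketch below. The animator API (`select`, `mark`) is invented here purely to illustrate the idea of embedded animation calls; a real animator would be a separate component receiving these events via RPC or formatted messages, per the second goal:

```python
# Depth-first search written against a hypothetical animator API.
# The event names are invented for illustration.

class RecordingAnimator:
    """Stand-in animator that records events; a real implementation
    would forward them to an independent display component."""
    def __init__(self):
        self.events = []

    def select(self, v):          # e.g., highlight a vertex being visited
        self.events.append(("select", v))

    def mark(self, v):            # e.g., recolor a vertex when finished
        self.events.append(("mark", v))

def dfs(graph, start, anim, visited=None):
    """DFS whose only coupling to the display is the animator calls."""
    if visited is None:
        visited = set()
    anim.select(start)
    visited.add(start)
    for w in graph.get(start, []):
        if w not in visited:
            dfs(graph, w, anim, visited)
    anim.mark(start)
    return visited
```

Because the algorithm emits events rather than drawing anything itself, swapping in a color display or an accessible interface would not require changing the algorithm code.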
Note: This new, probably ongoing, project is called Galant (Graph algorithm animation tools), which has a nice definition in English – look it up.
- Undergraduate Research 3
A Virtual Whiteboard for Senior Design
There are many approaches to organizing project data and making it available to members of a team committed to advancing goals of that project. Various websites - wikis, project portals, project dashboards, etc., - have been created for this purpose. This variety of forms has come about because of the inherent notion that “one size does not fit all.” Unique settings for each team and each project’s structure demands customization to provide efficiency of access and consistency of presentation of related team data. The goal of this project is to create a “Virtual Whiteboard System” suitable for senior design teams to capture and share information pertinent to their capstone projects. The Virtual Whiteboard System (VWS) should be web-based and permit postings of diagrams, notes, pictures, etc. In general, permit posting of any form of communication that members of a team might be inclined to post to a physical whiteboard: tentative project requirements, brainstormed project designs, overall project goals and status, etc. The VWS should accommodate a number of individual team whiteboards, guaranteeing private access to each team’s virtual whiteboard only by members of each team and team mentors. Instructors should be given view and post access to all whiteboards, and also provided with a whiteboard search function so that, for example, an instructor might request any postings related to project design for Team A, Team B and Team X.
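The access rules above could be sketched as follows; the class and method names are invented for illustration, and the real design is entirely up to the team:

```python
# In-memory sketch of the VWS access rules: team boards are private to
# members and instructors; search is instructor-only. Names invented.

class WhiteboardSystem:
    def __init__(self):
        self.boards = {}        # team -> list of posted items
        self.members = {}       # team -> set of usernames
        self.instructors = set()

    def _can_access(self, user, team):
        return user in self.members.get(team, set()) or user in self.instructors

    def post(self, user, team, text):
        if not self._can_access(user, team):
            raise PermissionError("only team members and instructors may post")
        self.boards.setdefault(team, []).append(text)

    def view(self, user, team):
        if not self._can_access(user, team):
            raise PermissionError("board is private to the team and instructors")
        return list(self.boards.get(team, []))

    def search(self, user, keyword):
        """Instructor-only keyword search across all team whiteboards."""
        if user not in self.instructors:
            raise PermissionError("search is instructor-only")
        return {team: [p for p in posts if keyword in p]
                for team, posts in self.boards.items()}
```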
Development of detailed requirements, choice of implementation technology, and all related project elements are entirely the province of the team. The semester goal is to build, demonstrate, and document a prototype virtual whiteboard system.
Bronto offers a powerful web interface for its online marketing platform. From this interface, Bronto’s corporate clients can design and execute new marketing campaigns and analyze the effectiveness of existing campaigns.
You, the NCSU team, will design and implement BroBot, a testing bot for Bronto’s web interface. Using technologies like Selenium, Cucumber, jQuery, and straight up programming in JS, PHP, Java, or Python you will build a bot that uses Bronto's software the same way a human would. The BroBot won’t need direct access to the Bronto codebase. Instead, you may design it to access the application via REST calls and automated headless browser interactions. Alternatively, you may take advantage of Bronto’s account with saucelabs, automate a regular browser and record videos of the bot using our software a la https://saucelabs.com/.
The BroBot is something we'd like to continually improve and expand, adding more abilities and scripted tests. But, it does not yet exist. You will be the first team to work on the BroBot. It needs a framework for running tests, developer tools to enable Bronto engineers to add more tests to it, reporting tools to show us what problems it did or did not find in our software (e.g., a wallboard display for the results of the most recent beating BroBot gave our app), and it definitely needs a mascot.
We have definite goals and a very definite need for this technology, but beyond the basics you'll have some leeway to choose what development directions you'd like to focus on for the semester. You'll need to do some server-side work to get the first BroBot up and running, but after that you can be a team of great client-side programmers, using JS to simulate a real user clicking on buttons and checking graphical reports, or a team of command line junkies pounding our APIs and charting our throughput, or a mix of skill sets covering the spectrum of web development and advancing in parallel. We'll figure it all out when we meet you.
Bronto Software provides the leading marketing platform for online and multi-channel retailers to drive revenue through email, mobile and social campaigns. Over 1000 organizations including Party City, Etsy, Gander Mountain, Dean & Deluca, and Trek Bikes rely on Bronto to increase revenue through interactive marketing.
Bronto has recently won several awards:
- NCTA 21 Award for Software Company of the Year 2011
- Stevie Award for Best Customer Service Department in 2009 and 2010
- CODIE Finalist for Best Marketing Software in 2011
- Best Place to Work by Triangle Business Journal in 2010 and 2011
- CED Companies to Watch in 2010
In 2002, Bronto was co-founded by Joe Colopy and Chaz Felix out of Joe's house in Durham, North Carolina. Since its humble beginnings, Bronto has emerged as a leader with a robust yet intuitive marketing platform for commerce-driven marketers.
Bronto's long-term focus on its customers, products and employees is now resulting in accelerated growth - its 60% growth in 2010 contributed to being listed as one of Inc Magazine's Top 100 fastest growing software companies.
- Duke Energy
Energy Conservation Competition
Duke Energy is a proponent of energy conservation among its customers. Reduced energy consumption across the Duke Energy service territory results in lower bills for customers and decreased carbon emissions. Duke Energy regularly offers tips for lowering bills, such as using CFLs, identifying energy "vampires" in the home that use electricity even when switched off, and managing air conditioning and heating wisely. Another way to reduce electricity use is to purchase appliances that are energy efficient.

The students will create an application that promotes efficient energy usage. The application will simulate a competition among players to see who can do the most to minimize their energy usage. Players will be rewarded for enacting efficiency measures and seeing a decline in energy usage over time. Additionally, the application will consume daily feeds of players' home energy usage. This data will be available in graphical form in the application and can be compared with other players' data.

The application should take the form of a game and have a social media component where results can be shared. It should be available as a standard website and on mobile devices (Android, iPhone). The students should be as creative as possible with the application, i.e., draw on social media and gaming experiences to make the game fun, widely used and a helpful tool for lowering home energy usage.
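One way the competition scoring might work, sketched under assumed rules: a player's score is the percentage decline in average daily usage relative to a baseline period, and the leaderboard ranks players by that score.

```python
def usage_score(baseline_kwh, recent_kwh):
    """Percent reduction of recent average daily usage vs. baseline;
    positive means the player is using less energy."""
    base = sum(baseline_kwh) / len(baseline_kwh)
    recent = sum(recent_kwh) / len(recent_kwh)
    return round(100 * (base - recent) / base, 1)

def leaderboard(players):
    """players: {name: (baseline_readings, recent_readings)}.
    Returns [(name, score)] ranked best reduction first."""
    scored = [(name, usage_score(base, recent))
              for name, (base, recent) in players.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)
```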
- Fujitsu America, Inc.
WCF Mapping Tool
Fujitsu America is one of the top three suppliers of retail systems and services worldwide. Using Microsoft's Distributed interNet Architecture (DNA), these systems offer a high-performance yet open platform that retailers as diverse as Nordstrom and Payless ShoeSource are able to customize.
Customizing these systems requires mapping between the business process and technologies of a client and the business objects defined within the Fujitsu Retail Suite (FRS). For example, a particular client may maintain pricing information in an Oracle database. In the FRS, this database might be used to create Price Lookup Records that are fed to the Fujitsu middleware tool, StoreCENTER. From there, pricing information can be delivered to the client’s stores. Of course, the FRS supports many types of business objects, and StoreCENTER will normally be fed from many different data sources.
The NCSU team will create a WCF tool in C# to assist in this aspect of customizing the FRS for a retailer, helping to extract business data from a client's existing information infrastructure and offering it to StoreCENTER. The tool will prepare client data as XML messages based on existing and new FRS business object schemas (XSDs). The XML messages generated will be "wrapped" in a standard envelope and pushed as a SOAP message into StoreCENTER. The tool will process StoreCENTER responses and provide meaningful feedback to the user.
It is not essential for the new tool to support all business objects defined in the FRS. In fact, these objects can be expected to change periodically to offer new features or respond to client needs. Instead, it is important for the team to design the tool in a way that makes it easy to expand the set of supported business objects or to modify existing ones.
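The message flow might be sketched as below, in Python for brevity (the actual tool will be written in C#). The tag names and the business-object registry are assumptions, not actual FRS schemas; the registry pattern shows one way to keep the set of supported business objects easy to extend, as the last paragraph asks.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

# Registry mapping business-object types to their field lists, so new
# object types can be added without touching the envelope code.
BUSINESS_OBJECTS = {
    "PriceLookup": ["itemId", "price", "effectiveDate"],
}

def build_message(obj_type, values):
    """Build the business-object XML message from client data."""
    if obj_type not in BUSINESS_OBJECTS:
        raise KeyError(f"unsupported business object: {obj_type}")
    msg = ET.Element(obj_type)
    for field in BUSINESS_OBJECTS[obj_type]:
        ET.SubElement(msg, field).text = str(values[field])
    return msg

def wrap_in_envelope(message):
    """Wrap a message in a standard SOAP envelope for StoreCENTER."""
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    body.append(message)
    return ET.tostring(envelope)
```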
Background for Project
Current internet search capabilities do not address consumers' need to identify, easily and quickly, service providers who have what they are looking for. A consumer can spend hours researching on the internet, only to have to spend more time making phone calls and talking to people to get the information they need to make a decision.
Services such as Angie's List, Yelp and Care.com fall short of making internet research easier. They do not present all available vendors in an area and, due to their revenue models, are reliant upon consumer reviews to populate their databases. A consumer still needs to use multiple sites and spend a lot of time just to find providers who offer what they are looking for within their specifications (e.g., location, price, reputation, age ranges, etc.).
Recent "daily deal" services such as Groupon, Living Social and YipIt only add to the confusion and frustration, not to mention flooding consumers' email with irrelevant offers. While these are valuable marketing tools, we want to give the consumer a way to receive targeted deals.
Smart Search requires multiple technology capabilities so the service can be rapidly developed and presented to consumers with as much service-provider data as possible, without directly contacting the service providers. The data needs to be found, brought into data tables, and structured in a manner that allows rapid analysis against consumer-selected criteria so results can be presented in an easy-to-read format.
Smart Search also needs to be agile in the data it presents, allowing the consumer membership to take an active role in refining the category and filter selections. We want consumers to feel as if they are part of the "design team".
- Utilize Convertigo technology for a web crawler that scours the internet, pulls specific data from websites, and places that data into pre-defined categories for Smart Search
- Data Categorization and Presentation
- Smart Search Website Design
- Consumer Features
- 3 Level Category Search - each level drills down into the data to refine the results
- Service Provider Presentation (see attachment “A” example) – presentation of service providers who match criteria selected by the consumers in order of % matched.
- Consolidated Consumer Reviews – each Service Provider Presentation will include a section that pulls in all available consumer reviews and summarizes them for the consumer
- Consolidated "Daily Deal" – each Service Provider Presentation will include a section that pulls in all available daily deals for that provider, or a deal the provider has added directly to the site.
- Location Radius with link to Google Maps
- Consumer Membership – create consumer membership page with the following options
- Preferred Category Selection – allow consumers to select categories and receive updates about vendors in the categories they select (new vendors added, specials added, new services, etc.)
- The consumer gets to set the parameters on what notifications to receive and how often
- Agile Consumer Feedback on Smart Search capabilities
- Service Provider Features
- Service Provider Membership – create page to allow vendors to pay to become members
- Select categories to appear in
- Provide complete information about their business, fill in all data fields used by Smart Search
- Send Quarterly email reminders to all member providers with a link to check, update and validate their listing information is correct
- Add a Special Feature
- Add a special they are running to Smart Search for a fee to run for 30, 60 or 90 days
- Notify Smart Search of specials on other services to be added to their listing
- Request “Ad Campaign” using Smart Search Consumer Membership data (at a fee)
- Receive competitor notifications when new vendor in same category is added or a vendor in same category has a special running.
- Vendor Advertising on website – vendors can choose to advertise with Smart Search (mechanism to be determined)
- Enable email ad campaigns through the website
This project should produce a usable prototype which can be used to launch the service in a single category to showcase the capabilities and consumer value. The key to the prototype is to build the capability to mine as much data as possible to complete the data sets required for Smart Search without having to contact the Service Providers directly.
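The "% matched" ranking mentioned in the feature list could be sketched as follows; the attribute names and the simple equality-based matching are simplifying assumptions.

```python
def match_percent(provider, criteria):
    """Percentage of the consumer's selected criteria this provider satisfies."""
    if not criteria:
        return 0.0
    hits = sum(1 for key, wanted in criteria.items()
               if provider.get(key) == wanted)
    return round(100 * hits / len(criteria), 1)

def rank_providers(providers, criteria):
    """Return providers ordered by % matched, highest first, for the
    Service Provider Presentation."""
    return sorted(providers,
                  key=lambda p: match_percent(p, criteria),
                  reverse=True)
```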
Since 1984, I-Cubed has provided the people, products and processes to extend the value of customers’ enterprise systems. The company’s subject matter expertise in Enterprise Applications, Product Lifecycle Management Consulting, Business Process Consulting and Web Content Management provides unique insights that translate customer needs into commercial products. I-Cubed’s product and services portfolio helps customers accelerate the integration of enterprise systems and collaborate securely throughout the supply chain. I-Cubed has been sponsoring senior design projects for more than 10 years. I-Cubed’s office is conveniently located on NC State’s Centennial campus in Venture 11.
Automated Text and Image Extraction from PDF and TIF Mechanical Drawings
Objective: Design and implement a Windows application that performs bulk extraction of dimension and text information from PDF/TIF-formatted mechanical drawings.
Background: Mechanical drawings are created in various CAD systems (e.g., SolidWorks, AutoCAD) but are typically saved to PDF or TIF as a neutral format before sharing outside of engineering. In the case of PDF, some of the drawings have selectable text embedded in the PDF file. For these selectable-text files, text can be highlighted and copied by the user using standard Windows copy commands (e.g. CTRL-C). In the case of TIF, text is never selectable and is instead part of the overall raster image representing the drawing. Some PDFs contain only raster images as well. For these raster-image files, text can only be copied after using optical character recognition (OCR) to extract the text from the raster image(s).
- Open both an editable text and raster-image mechanical drawing in PDF and/or TIF format and display them to the user.
- Enable the user to choose an existing region on the drawing or specify a region (e.g. dragging a box over an area) on which to extract data.
- Extract the data contained in the region chosen for extraction. The data to be extracted includes:
- A list of all drawing elements (e.g., dimensions, notes) found within the region
- For each drawing element:
- Text displayed on the drawing
- An image of the drawing element
- The X, Y coordinates of the bounding box surrounding the drawing element (e.g. x1,y1,x2,y2).
- Present the extracted data to the user in a grid as a list of extracted images, text and locations.
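The extraction result might be modeled with a structure like the following; the field names and the region-containment rule are illustrative assumptions (the real application would populate text via Tesseract OCR and images via GdPicture).

```python
from dataclasses import dataclass

@dataclass
class DrawingElement:
    text: str            # text displayed on the drawing (from OCR)
    bbox: tuple          # (x1, y1, x2, y2) bounding box coordinates
    image_path: str = "" # cropped image of the element

def inside(bbox, region):
    """True when bbox lies fully within the user-selected region."""
    x1, y1, x2, y2 = bbox
    rx1, ry1, rx2, ry2 = region
    return rx1 <= x1 and ry1 <= y1 and x2 <= rx2 and y2 <= ry2

def extract_region(elements, region):
    """Return the drawing elements whose bounding boxes fall inside the
    chosen region, ready to be presented to the user in a grid."""
    return [e for e in elements if inside(e.bbox, region)]
```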
Figure 1 - Mechanical drawing in PDF format.
Figure 2 - Drawing element showing x,y coordinates of element's bounding box.
Figure 3 - Mechanical drawing with bounding boxes highlighted. Note: Not required for project. For illustration purposes only.
Figure 4 - Extracted image of drawing element.
Figure 5 - List of drawing elements with extracted element images and x,y coordinates of element bounding boxes.
- Application will rely on Tesseract 3.x for Optical Character Recognition
- Application will rely on GdPicture9 for imaging
- Application will be written in C# and/or C++.
We will provide:
- GDPicture9 imaging toolkit
- Tesseract 3.x libraries
- Sample Tesseract 3.x dictionary files
- Sample Mechanical drawings
Important Skills to Achieve Objective:
- C# Programming
- OCR Technology
- PDF Imaging
- Customer-driven development
InspectionXpert Corporation is a fast-growing software company headquartered in Apex, North Carolina. We are the creators of the InspectionXpert product line of quality inspection software for manufacturers and suppliers to the medical device, aerospace, automotive and energy industries. Our customers include the top companies and organizations in our target markets, among them NASA, Los Alamos National Laboratories, SpaceX, Medtronic and Raytheon.
InspectionXpert Corporation was founded in 2004 by Jeff Cope (BSME-01) out of his house in Apex, North Carolina and was grown organically with no outside capital. The need to please customers from day one has helped us grow into a customer-driven company with the most user-friendly software in our industry.
Theme Designer for HTML Web Applications
The goal of this project is to design and implement a Theme Designer for HTML using Dojo 1.8 technology.
Themes are used in Web application development to lend a consistent look and feel to widgets within an application and also between applications themselves. Adjusting colors, fonts or images is the most common technique for customizing a theme.
SAS currently ships the SAS® Theme Designer for Flex, which enables our customers to create and deploy simple, visually appealing customized themes for any of our SAS products that are implemented in Flex technology. The Theme Designer itself is implemented in Flex. It provides a simple user interface and an instant preview feature that facilitates an iterative and exploratory design process. It also provides a one-step save-and-deploy feature that makes a new custom theme immediately available to all eligible applications and users.
Diagram 1: Screenshot of SAS Theme Designer for Flex
Within the designer, you can apply global settings to the entire application.
Diagram 2: Global settings that can be applied to the entire application.
You can also set things on a more granular level, such as for a specific component. Setting things at a component level will override any global settings specified.
Diagram 3: Settings specific to a Button component.
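The resolution rule described above, where component-level settings override global settings, could be modeled as a simple merge (the setting names here are made up):

```python
def resolve_theme(global_settings, component_settings):
    """Merge theme settings for one component: start from the global
    values, then let any component-level entries win."""
    resolved = dict(global_settings)
    resolved.update(component_settings)
    return resolved
```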
With every change you make to any of the user interface components that are exposed through the designer, you instantly see those changes reflected in the main content area of the designer. It provides a quick, visual confirmation of the theme you are creating every step of the way.
Once you are finished creating a custom theme or modifying an existing one, the designer provides the ability to instantly deploy the new theme. Existing users of the theme will automatically see the changes when they launch their application or refresh it if it’s already displayed.
Diagram 4: Tools Menu showing deployment options.
Full functional specifications will be provided at the start of the project; however, the idea is that the Themes Designer for Flex will serve as a reference application for the HTML version. Within the Dojo toolkit, there is an extensive set of widgets to choose from. We’ll identify which components the Theme Designer should support along with which component attributes should be configurable. Most likely, we’ll be working with the components in the Dijit package but might possibly use a few from the DojoX package as well.
Baseline functionality for the theme designer:
- The HTML Theme Designer should mimic as closely as possible the user interface and functionality offered in the SAS Theme Designer for Flex
- The application needs to support full keyboard accessibility (with JAWS support as a stretch goal)
- Localization should also be supported (with RTL as a stretch goal)
At the completion of the project, we'd expect to have:
- Implementation of a theme designer for HTML using Dojo technology
- Well documented code
- End user documentation for the theme designer (content to be determined)
- Complete set of unit tests (tool to be decided)
- Set of automated tests using Selenium
Social Data Analysis
The goal of the project is to leverage the Teradata Unified Data Architecture in order to find patterns in social data. The Teradata UDA incorporates structured, semi-structured and unstructured data for fueling business intelligence, analytics and applications. In this project, students will collect unstructured data from social networks including Foursquare, Facebook and Twitter, perform data analysis using the Aster analytical platform and provide the results of that analysis to the Teradata applications for use in marketing campaigns. In this project students will become familiar with NoSQL, Hadoop, MapReduce, SQL, social network APIs, relational databases and web services.
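As a toy illustration of the MapReduce style of analysis involved (pure Python rather than Hadoop, with made-up data): count venue mentions across unstructured social posts, the kind of pattern that could feed a marketing campaign.

```python
from collections import defaultdict

def map_phase(posts):
    """Map step: emit (venue, 1) pairs for every @-style mention found
    in the unstructured post text."""
    for post in posts:
        for word in post.split():
            if word.startswith("@"):
                yield word.lstrip("@").lower(), 1

def reduce_phase(pairs):
    """Reduce step: sum counts per venue key."""
    totals = defaultdict(int)
    for venue, count in pairs:
        totals[venue] += count
    return dict(totals)
```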
The project will be run within the Agile Scrum framework. Agile development provides the benefits of early and consistent customer engagement. The Teradata representative will serve as the Product Owner for the Scrum team in order to provide application requirements and to assist with backlog grooming and acceptance criteria. Development will occur in two week Sprints. Planning for tasks will occur at the beginning of the Sprint, checkpoints and backlog grooming will be scheduled mid-Sprint and demonstrations of progress will happen at the end of each Sprint in a Sprint review.
Teradata is the world's largest company focused on integrated data warehousing, big data analytics and business applications. Our powerful solutions portfolio and database are the foundation on which we’ve built our leadership position in business intelligence and are designed to address any business or technology need for companies of all sizes.
Only Teradata gives you the ability to integrate your organization’s data, optimize your business processes, and accelerate new insights like never before. The power unleashed from your data brings confidence to your organization and inspires leaders to think boldly and act decisively for competitive advantage. Learn more at teradata.com.
- Undergraduate Research 4
A User Interface for Authoring Game-Based Narratives for Real Life Crimes
This project involves the development of a novel user interface for the creation of story scripts -- much like screenplays -- that will be used by an intelligent story generation system to create machinima within the IC-CRIME system. IC-CRIME is a game-based tool for use by crime scene investigators to model real-world crime scenes, link objects in the virtual scene to real-world databases (e.g., fingerprint, hair and fiber) and collaborate with detectives and criminologists within a 3D virtual environment. One of IC-CRIME's key features will be the ability for novice game players (e.g., detectives, prosecutors) to create, share and discuss cinematics that provide hypothetical explanations for the evidence observed in a crime scene (more information about IC-CRIME is available at http://iccrime.ncsu.edu).
Figure 1. A sample screenshot from an IC-CRIME collaborative session.
Our ongoing research effort is creating the AI engine that serves as the back-end for this story generation capability: fleshing out the details of a story, driving the story's action within a game engine and controlling a 3D camera to film it. The system lacks a user interface, however, to allow IC-CRIME users to author the scripts used by the AI engine. Once it has a usable UI, the system will provide insight into the relationship between expressive, mixed-initiative user interface designs for storytelling and effective communication of action sequences using 3D game-based methods.
The project will involve students collaborating with the project PI (Young) and a graduate student also working on the IC-CRIME project. The implementation will use C# and be built on top of the Unity3D game engine. Because the project will focus on the user interface, no significant 3D programming skills will be required, and the existing IC-CRIME team of developers will be available to help the Senior Design team while they are learning the tools and environment.
The project may be readily divided into three elements. First, the project requires work to design, create and control the UI elements for specifying actions within the script (much like the drag-and-drop UI style used in end-user authoring tools such as Alice or Scratch). Second, the project requires the capability to translate the UI elements being manipulated into the XML used to communicate with the AI engine back-end. Third, the system requires support for a flexible mixed-initiative interaction style, allowing the user to interrupt, revise, explore and review a script under development.
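The second element, translating authored actions into XML for the AI engine, might look like this sketch; the element and attribute names are assumptions, since the real schema belongs to the IC-CRIME back-end (and the actual implementation will be in C# on Unity3D).

```python
import xml.etree.ElementTree as ET

def actions_to_xml(actions):
    """Serialize a list of script actions, each a dict such as
    {"actor": ..., "verb": ..., "target": ...}, into an ordered
    <script> document for the story generation engine."""
    script = ET.Element("script")
    for step, action in enumerate(actions, start=1):
        node = ET.SubElement(script, "action", order=str(step))
        for key, value in action.items():
            node.set(key, value)
    return ET.tostring(script, encoding="unicode")
```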
At project initiation, the Senior Design team will work with the IC-CRIME team to develop project requirements and be responsible for brainstorming to settle on a novel UI metaphor that can be supported by the underlying software toolkit/SDK. At the completion of the project, the Senior Design team will be included in the design and execution of an experimental evaluation for the system, including the user interface. This work could involve input on experimental design, data collection and statistical analysis.
- Allied Telesis
Enhanced Solution for Tomorrow’s Home with the Intelligent Multi-service Gateway (iMG)
Allied Telesis is a network infrastructure/telecommunications company, formerly Allied Telesyn. Headquartered in Japan, their North American headquarters are in San Jose, CA. They also have an office on Centennial Campus. Founded in 1987, the company is a global provider of secure Ethernet & IP access solutions and an industry leader in the deployment of IP triple play (voice, video & data) networks over copper and fiber access infrastructure.
Students are asked to develop a tool with a web-services front end to the iMG, consisting of an extensible framework that enables end users to perform device discovery, control, monitoring and management functions. The backend will incorporate an open-source database (e.g., MySQL) that can store all the data necessary for these functions, with extensibility in mind.
The project scope will also need to incorporate the necessary security mechanisms. The web services should be compatible with all browsers and be HTML5 compliant. Control should allow the ability to activate, de-activate and change device attributes on a time-of-day or event-based trigger. A dashboard is also desirable.
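The time-of-day trigger control might be sketched as follows; the rule format and device model are illustrative assumptions, not iMG APIs.

```python
from datetime import time

class Schedule:
    """Time-of-day rules that set device attributes, e.g. activate a
    service at 08:00 and de-activate it at 18:00."""

    def __init__(self):
        self.rules = []  # (start, end, device, attribute, value)

    def add_rule(self, start, end, device, attribute, value):
        self.rules.append((start, end, device, attribute, value))

    def apply(self, now, devices):
        """Apply every rule whose time window contains `now` to the
        device table, and return the updated table."""
        for start, end, device, attribute, value in self.rules:
            if start <= now <= end:
                devices[device][attribute] = value
        return devices
```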
Data Domain is the brand name of a line of disk backup products from EMC. Data Domain systems provide fast, reliable and space efficient on-line backup of files, file systems and databases ranging in size up to terabytes of data. These products provide network based access for saving, replicating and restoring data via a variety of network protocols (CIFS, NFS, OST). Using advanced compression and data de-duplication technology, gigabytes of data can be backed up to disk in just a few minutes and reduced in size by a factor of ten to thirty or more.
Our RTP Software Development Center develops a wide range of software for performing backups to and restoring data from Data Domain systems including libraries used by application backup and restore software to perform complete, partial, and incremental backups and restores using these backups.
With ever larger amounts of data to back up during constantly decreasing backup time periods, performance and especially scalability of performance is critical. This project will analyze the performance effects of a potential improvement to data read operations in a WAN environment that would reduce network bandwidth usage by compressing the data sent from the Data Domain system to the client application. The purpose is to answer questions such as: What is the additional resource load on the client due to decompressing the received data? What factors (compression algorithm, kind of data, etc.) affect decompression performance, and in what manner? What is the impact on read latency?
We want to develop an application-level performance tool that will measure the effects of client-side data decompression during restore operations. This tool will act as an application that is restoring data from a Data Domain system. It will read either compressed or uncompressed files from the Data Domain system. For compressed files, the application will decompress the data. The application will compute or collect various statistics to enable comparison of decompression performance against the base case where the read data is not decompressed. By varying parameters (such as the size of the read request, the type of data, or the compressibility of the data), the tool will enable analysis of decompression performance on the client system. The types of questions we want to answer include: What is the additional CPU load on the client due to decompressing the data? What is the change in latency of read operations when doing decompression? Is there an optimal data read size when doing decompression?
We then want to use this decompression analysis tool to measure, characterize and compare the performance of several decompression algorithms. We are also interested in seeing if there are tuning operations that can improve decompression performance.
The project is proposed as two phases: creating the performance analysis tool and then evaluating decompression performance using the tool.
Produce a decompression analysis tool with appropriate usage scripts that:
- Reads a specified uncompressed backup file, recording appropriate performance statistics.
- Reads a specified compressed backup file and decompresses the file data, recording appropriate performance statistics.
- Allows specification of various input and output parameters, such as size of read operations, type of decompression, etc.
- Generates detailed and summary performance statistics, charts, and / or graphs.
- Runs in a C/Linux environment, but is readily portable to other environments (e.g. Windows).
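The core measurement loop of the Phase 1 tool could be sketched as below. This uses in-memory buffers and zlib purely for illustration; in the real tool, reads would come from a Data Domain system via the provided restore APIs, and the compression algorithm and read size would be input parameters.

```python
import time
import zlib

def timed_read(data, compressed, read_size=64 * 1024):
    """Consume `data` in read_size chunks, decompressing if requested,
    and return (bytes_produced, seconds_elapsed) for later comparison
    against the uncompressed base case."""
    start = time.perf_counter()
    out = 0
    decomp = zlib.decompressobj() if compressed else None
    for i in range(0, len(data), read_size):
        chunk = data[i:i + read_size]
        if decomp:
            chunk = decomp.decompress(chunk)
        out += len(chunk)
    if decomp:
        out += len(decomp.flush())
    return out, time.perf_counter() - start
```

Running the same loop with and without decompression, across several read sizes and data types, yields the comparison statistics the tool is meant to report.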
Phase 2 is an open ended performance measurement and analysis phase using the decompression analysis tool produced in Phase 1 to do one or more of the following. Which of these items are done will depend on the capabilities of the tool and the time and resources available after Phase 1 is completed.
- Determine the additional CPU resources used on the client system when decompressing data versus when no decompression is done.
- Determine how the additional CPU load due to decompression varies based on the size of each read data request.
- Determine the effect of decompression on data read latency.
- Do one or more of the above performance measurements or analyses on different client platforms (e.g., Solaris, HP-UX, AIX).
- Use the decompression analysis tool to determine if any changes can be made in the decompression implementation to improve performance of any of the analyzed metrics.
- Others as suggested by the team or by their analysis.
EMC will provide:
- A Data Domain hardware loaner system including all necessary system software for reading backup files via a restore application for the duration of the project.
- Documentation for administering the Data Domain system.
- A set of binary libraries with documented Application Programming Interfaces (APIs) that can be linked with application software that acts as a restore application by calling the provided APIs.
Benefits to NC State Students
This project provides an opportunity to attack a real-life problem covering the full engineering spectrum, from requirements gathering through research, design and implementation, and finally usage and analysis. This project will provide opportunities for creativity and innovation. EMC will work closely with the team to provide guidance and customer feedback as necessary to maintain project scope and size. The project will give team members exposure to commercial software development on state-of-the-art industry backup systems.
Benefits to EMC
As storage usage worldwide continues to grow exponentially, providing our customers with the features and performance they need to better protect and manage their data is critical. The demands of ever-growing amounts of data and reliability complicate the roles of development engineers in designing and implementing new features and maintaining scalable performance in existing software. The proposed decompression analysis tool and the performance measurements based on it will provide a basis for architectural and design decisions in current and future versions of Data Domain backup application software and system software.
EMC Corporation is the world's leading developer and provider of information infrastructure technology and solutions. We help organizations of every size around the world keep their most essential digital information protected, secure, and continuously available.
We help enterprises of all sizes manage their growing volumes of information—from creation to disposal—according to its changing value to the business through big data analysis tools, information lifecycle management (ILM) strategies, and data protection solutions. We combine our best-of-breed platforms, software, and services into high-value, low-risk information infrastructure solutions that help organizations maximize the value of their information assets, improve service levels, lower costs, react quickly to change, achieve compliance with regulations, protect information from loss and unauthorized access, and manage, analyze, and automate more of their overall infrastructure. These solutions integrate networked storage technologies, storage systems, analytics engines, software, and services.
EMC's mission is to help organizations of all sizes get the most value from their information and their relationships with our company.
The Research Triangle Park Software Design Center is an EMC software design center. We develop world-class software that is used in our VNX storage, Data Domain backup, and RSA security products.
EMC where information lives.
Interactions of the HTTP Adaptive Protocol with TCP
The HTTP Adaptive Streaming protocol has recently become a popular way to broadcast video content over the Internet. This protocol is quite different from server-based video transmission protocols, such as the Real-time Transport Protocol (RTP), and relies more on intelligent clients to decide which content bit-rate the server should stream and when to switch between streams. The HTTP Adaptive protocol has recently been standardized by the 3rd Generation Partnership Project (3GPP), an international standards organization. The standardized protocol is known as Dynamic Adaptive Streaming over HTTP (DASH), or MPEG-DASH. It is based on HTTP and consequently uses the Transmission Control Protocol (TCP) as its transport layer. This is why it is referred to as an Over-The-Top (OTT) protocol.
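At the heart of the protocol is the client-side rate decision: after each segment download, the client picks the highest advertised bit-rate its measured throughput can sustain. A simplified sketch follows; the bit-rate ladder and safety factor are assumptions for illustration, not part of the MPEG-DASH specification.

```python
# Hypothetical set of renditions advertised by the server, in kbit/s.
BITRATE_LADDER = [235, 750, 1750, 3000, 5800]

def choose_bitrate(throughput_kbps, safety=0.8):
    """Return the highest rendition not exceeding safety * measured
    throughput; fall back to the lowest rendition when even that is
    more than the link can carry."""
    budget = throughput_kbps * safety
    candidates = [rate for rate in BITRATE_LADDER if rate <= budget]
    return max(candidates) if candidates else BITRATE_LADDER[0]
```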
The goal of this project is to create a streaming media testbed using open source software in order to study the performance of HTTP Adaptive Streaming, as described in the recent paper "Dynamic Adaptive Streaming over HTTP Dataset", Proceedings of the Second ACM Multimedia Systems Conference (MMSys), pp. 89-94 (2012). Operation of the testbed should be demonstrated by reproducing experiments described in that paper.
A stretch goal is to explore how TCP's congestion control interacts with the rate-adaptation mechanism of HTTP Adaptive Streaming.
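The client-driven rate switching described above can be illustrated with a small heuristic. This is a sketch of one common approach (throughput-based selection using a harmonic-mean estimate), not the mechanism mandated by DASH; the bitrate ladder and safety margin are assumed values:

```python
# Illustrative sketch of a throughput-based bitrate selection heuristic of the
# kind an HTTP Adaptive client might use. The encoding ladder and margin are
# assumptions for illustration.

AVAILABLE_BITRATES_KBPS = [300, 700, 1500, 3000, 6000]  # assumed encoding ladder

def select_bitrate(recent_throughputs_kbps, margin=0.8):
    """Pick the highest bitrate below a conservative throughput estimate.

    Uses the harmonic mean of recent segment throughputs, which damps the
    effect of occasional fast segments, scaled by a safety margin.
    """
    n = len(recent_throughputs_kbps)
    harmonic_mean = n / sum(1.0 / t for t in recent_throughputs_kbps)
    budget = margin * harmonic_mean
    # Fall back to the lowest rung if even that exceeds the budget.
    candidates = [b for b in AVAILABLE_BITRATES_KBPS if b <= budget]
    return candidates[-1] if candidates else AVAILABLE_BITRATES_KBPS[0]
```

A testbed client could log each segment's download throughput and feed the recent window into a function like this to decide the next segment's representation.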
Extron Electronics is a leading manufacturer of professional AV system integration products. Worldwide Extron headquarters are located in Anaheim, CA with local offices in Raleigh, NC.
- Fidelity Investments
Fidelity Investments has many QA groups automating their respective test cases using tools such as QTP, Selenium, and SOATest. These test cases are executed by creating new or updating existing scripts, some of which are VB or Perl scripts. This kind of automation requires knowledge of Perl or VB scripting and also adds the future task of maintaining such scripts along with the actual automation code. The goal of this project is to create integration among the different automation tools and to provide a simple, consistent way to update and maintain the automation.
Students undertaking this project will have to develop a web application to support:
- A UI for adding/registering the automation scripts
- Execution of automation code on demand or on a schedule
- Integration with the automation tools to determine execution results
- A final report on execution results, such as time taken and pass/fail status
- A RESTful web services API to request automation test execution
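One way to picture the tool-integration layer behind these requirements is a registry that maps each automation tool to the command used to launch its scripts. This is a hypothetical sketch; the tool names, launcher commands, and `build_command` helper are illustrative assumptions, not Fidelity's actual configuration:

```python
# Hypothetical tool-integration registry: maps each automation tool to the
# command line used to launch one of its scripts. All commands here are
# invented placeholders for illustration.

TOOL_COMMANDS = {
    "selenium": ["java", "-jar", "selenium-runner.jar"],
    "qtp":      ["cscript", "qtp_launcher.vbs"],
    "soatest":  ["soatestcli", "-run"],
}

def build_command(tool, script_path):
    """Return the command-line invocation for a registered automation script."""
    if tool not in TOOL_COMMANDS:
        raise ValueError("unknown automation tool: %s" % tool)
    return TOOL_COMMANDS[tool] + [script_path]
```

The scheduler and the REST API could both funnel execution requests through a single function like this, so adding a new tool means adding one registry entry rather than new per-tool scripting.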
- Iron Data
Iron Data provides government clients a number of software solutions to both automate and enhance process workflows. In general, clients have requested the ability to manage the schedules of workers and to collect metrics on where time is spent and allocated. Office productivity software can address this request; however, current Iron Data solutions do not have this capability. We would like to build an easy-to-use solution.
The NCSU Senior Design team will create a calendaring application which integrates with Iron Data solutions, allowing government entities to allocate and track workers' time. The system will leverage the GRAILS platform and be implemented as a web interface.
Desired skills:
- Understanding of web design concepts
- Understanding of Java/OOP concepts
The application will:
- Use GRAILS platform built on the Java Virtual Machine
- Add/Create/Update Calendar Users
- Maintain information about a user
- Schedule an appointment
- Respond to an appointment invite
- Cancel an appointment
- Reschedule (update) an appointment, and resend the invites
- View users' meetings/schedules
- Create and maintain notes for appointments
- Allow user login, creation/deletion of events, and the ability to browse the availability of other users
- Ability to reschedule appointments
- Integrate users into LDAP or Iron Data provider
For user authentication, use an LDAP server or the Iron Data web service/DB schema.
For instance, the team can leverage Spring LDAP: http://www.springsource.org/ldap
This approach uses an Active Directory (for instance) as the user repository; the application queries it using the Spring LDAP framework and receives confirmation that a user is valid.
Example code can be found here (SVN Repo)
Within trunk, take a look at /samples/samples-utils/
Here is another example:
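As a minimal sketch of the validation flow described above, the following uses a fake in-memory directory so it runs without a server. The DN template and the `bind` interface are assumptions; a real implementation would go through Spring LDAP (or an equivalent client) against the actual directory schema:

```python
# Sketch of bind-based LDAP authentication. The LDAP client is injected so the
# flow can be exercised locally; the DN template is an assumed example.

def authenticate(ldap_client, username, password,
                 dn_template="uid={user},ou=people,dc=example,dc=com"):
    """Bind as the user's DN; a successful bind confirms the user is valid."""
    dn = dn_template.format(user=username)
    return ldap_client.bind(dn, password)

class FakeLdapClient:
    """Stand-in for a directory server, for local testing only."""
    def __init__(self, accounts):
        self.accounts = accounts  # maps DN -> password

    def bind(self, dn, password):
        # A real client would attempt an LDAP bind and return success/failure.
        return self.accounts.get(dn) == password
```

The key design point, as in the Spring LDAP samples, is that the application never compares passwords itself; it delegates the check to the directory by attempting a bind and acting on the pass/fail result.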
- NetApp 1
Enhance the server caching performance prediction tool, which was created by an NCSU team in Fall 2012 (see the Executive Summary from the Fall 2012 Final Report, below), to predict the performance of an application given the following inputs:
- Working set size
- Cache size
- Workload characterization (% of reads versus writes)
- SSD performance: performance of a PCI-E SSD vs SSD using SATA/SAS interface
- Flash Accel performance: this is NetApp server caching software which NetApp can provide
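As a rough illustration of how these inputs might combine, the sketch below uses a deliberately simple model: the hit ratio is the fraction of the working set that fits in cache, and average read latency blends SSD and networked-storage latencies. The formula and the default latency figures are assumptions for illustration, not NetApp's actual prediction model:

```python
# Back-of-the-envelope read-latency model (an illustrative assumption, not
# NetApp's model). Assumes a uniformly accessed working set, so the cache hit
# ratio is simply the fraction of the working set that fits in the cache.

def predict_read_latency_us(working_set_gb, cache_gb,
                            ssd_latency_us=100.0, network_latency_us=1000.0):
    """Estimate average read latency with a server-side flash cache.

    Cache hits are served at SSD latency; misses go to networked storage.
    Writes are assumed to go to networked storage and are excluded here.
    """
    hit_ratio = min(1.0, cache_gb / working_set_gb)
    return hit_ratio * ssd_latency_us + (1.0 - hit_ratio) * network_latency_us
```

The enhanced tool would replace the assumed constants with measured figures for PCI-E versus SATA/SAS SSDs and for Flash Accel itself, and weight the result by the workload's read/write mix.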
Executive Summary (from Fall 2012 Final Report)
The goal of this project is to deliver a tool that will help match NetApp customers with an SSD to purchase for use with Flash Accel. Flash Accel is a new NetApp software feature that was recently announced for providing server-side flash-based caching of networked storage. Server applications that rely on heavy read cycles from NetApp Filers will benefit from Flash Accel by reducing the latency of applications to networked storage.
The primary function of our performance prediction tool is to allow customers to evaluate the benefit of different SSD sizes so they can adequately visualize the performance gains from a reduction in cache misses for a given workload. Our solution is implemented as a Python/Django web application that allows users to input their hardware configurations and performance goals. The hardware configurations are used to provide inputs for the predicted performance based on latency vs read/writes per second. The performance goals that are entered by the user are used to return a recommended set of SSDs that meet the performance requirements set forth.
The current performance model is based on benchmark performance data provided by NetApp. Future development and testing of prediction models can be performed on the IBM X3650 M2 server and the NetApp FAS2040 Filer which we received from NetApp. This will allow the web application's results to be validated for the supplied hardware configuration and will provide a process for validating additional configurations.
- NetApp 2
API command line tool/shell, “zapish”
Many groups throughout NetApp use a lightweight command-line utility “apitest” to execute a single API function (“ZAPI”) on a NetApp system and get the output. apitest can be used manually, for example to test a single ZAPI command, or it can be called from within a script and its output parsed for the desired data. It typically works, but it lacks features, is unnecessarily verbose, and has minor bugs that remain unfixed. We would like to develop a new tool, named “zapish”, to replace apitest.
The new zapish tool should assist the user when entering a ZAPI command. To do this, it will most likely need a “shell” type of environment, similar to a typical command line (e.g. bash in Linux). Rather than having to refer to our ZAPI documentation, the user should be assisted in entering the ZAPI command and its parameters, for example with tab completion and possibly parameter value suggestion.
The output of zapish should be configurable. The current apitest tool just outputs the entire XML response of the ZAPI; we would like zapish to be able to take a query syntax (e.g. with XPath) so that it knows which field the user is interested in viewing. For example, if a network information ZAPI was executed, the user should be able to specify that zapish only output the IP address field.
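The field-selection idea could look like the following sketch, which uses Python's ElementTree path syntax in place of full XPath. The sample response layout is invented for illustration; real ZAPI responses differ:

```python
# Sketch of configurable output: select one field out of a ZAPI-style XML
# response instead of printing the whole document. The response below is an
# invented example, not a real ZAPI payload.
import xml.etree.ElementTree as ET

SAMPLE_RESPONSE = """
<results status="passed">
  <net-interface>
    <name>e0a</name>
    <ip-address>10.0.0.5</ip-address>
  </net-interface>
</results>
"""

def select_field(response_xml, path):
    """Return the text of the first element matching an ElementTree path."""
    return ET.fromstring(response_xml).findtext(path)
```

With something like this, `zapish --select net-interface/ip-address` could print just `10.0.0.5` for the network-information example above.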
Zapish should also be useful as an automated tool, for example if it is executed within a QA script. This will most likely mean it should have a set of command-line arguments to specify what ZAPI to execute, what result field to select, the details of the system to connect to, and so on.
Some ZAPIs are iterators; they must be executed multiple times, passing in a tag string, to iterate through a list of objects. As a stretch goal, we would like for zapish to be able to handle these iterators for the user. That is, rather than the user having to run the iter multiple times and copy-paste the tag string, zapish should make the multiple calls necessary and output to the user the entire collection (or the desired response fields as mentioned above).
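The iterator handling might be structured as below. Here `invoke` stands in for the real ZAPI call, and its `(records, next_tag)` return shape is an assumed interface rather than apitest's actual one:

```python
# Sketch of the iterator stretch goal: keep re-invoking an iterator-style
# ZAPI, feeding back the returned tag, until the server reports no more
# records. `invoke` is an assumed stand-in for the real ZAPI call.

def collect_all(invoke, api_name):
    """Gather every record from an iterator ZAPI into one list."""
    records, tag = [], None
    while True:
        batch, next_tag = invoke(api_name, tag)  # assumed (records, next_tag) shape
        records.extend(batch)
        if next_tag is None:                     # no tag means iteration is done
            return records
        tag = next_tag
```

This hides the tag bookkeeping from the user entirely: zapish would make the repeated calls itself and emit the whole collection (or the selected response fields) at the end.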
Each version of NetApp’s Data ONTAP operating system has brought changes to the ZAPIs, and NetApp’s manageability tools also have their own ZAPIs. zapish should at least support one given ZAPI set (the latest Data ONTAP ZAPIs). As a stretch goal, we would like for zapish to be able to switch between multiple sets of ZAPIs, either automatically or by user configuration. This way it will be flexible enough to use on all different NetApp systems, rather than being tied to the ZAPI set of one ONTAP version.
Zapish should run on Linux operating systems at a minimum. We recommend either Java, Perl, or another cross-platform scripting language as the programming language, whichever the group is most experienced with.
- Werum Software & Systems
PAS-X Maintenance Contract Management System
Werum provides a software product called PAS-X which tracks and manages the pharmaceutical production process. Each solution is a combination of many software components as well as client-specific enhancements. This makes the management of maintenance contracts tedious, particularly given the various Service Level Agreements that are available to clients.
To allow for better management of maintenance contracts, Werum would like to develop a web-based application that describes all the elements of maintenance contracts as well as basic information related to each client solution. This solution would replace the paper-based manual process that is currently in use.
Two primary goals are expected (key success factors):
- Maintenance of all information related to maintenance contracts;
- Generation of all information needed to generate client invoices.
The following key requirements have been identified:
- Model client information for each PAS-X instance: client, site, solution, contacts, IT system description, VM system used for debugging, PAS-X version, etc.
- Model maintenance agreement in the form of: maintenance calculation model for licenses & enhancement (each one is different), service level agreement (hours of coverage), warranty period
- Model PAS-X solution delivered to client in the form of software components, where each component includes: PAS-X module, name, description, list price, maintenance basis fee, maintenance amount, acceptance date (start of warranty);
- Model CPI yearly adjustments, which adjust maintenance basis fees;
- Model the monthly, quarterly, or yearly invoicing process: ideally the application would generate invoices as PDF files produced from an XML file and a layout definition file (cascading style sheet); at a minimum, all the information needed to generate an invoice should be provided through a dialog;
- The solution needs to build on previous NCSU Senior Projects wherever applicable;
- The documentation of the developed solution needs to comply with Werum quality standards.
It is not unusual for a solution to contain 50 or 60 software components.
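To make the CPI and fee requirements concrete, here is a sketch of how yearly adjustments could roll up across a solution's many components. The compounding formula and field names are illustrative assumptions, not Werum's actual contract terms:

```python
# Illustrative sketch (assumed formulas, not Werum's contract terms) of CPI
# adjustments applied to maintenance basis fees and summed over a solution's
# software components for an invoice.

def adjusted_basis_fee(basis_fee, yearly_cpi_rates):
    """Apply each year's CPI adjustment to a component's maintenance basis fee."""
    for rate in yearly_cpi_rates:
        basis_fee *= (1.0 + rate)
    return basis_fee

def invoice_total(components, yearly_cpi_rates):
    """Sum CPI-adjusted basis fees over all components in the solution."""
    return sum(adjusted_basis_fee(c["maintenance_basis_fee"], yearly_cpi_rates)
               for c in components)
```

With 50 or 60 components per solution, the value of modeling this explicitly is that one recorded CPI rate per year updates every component's fee, rather than each being recalculated by hand.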
The following Service Level Agreements are defined for all customers (various rate formulas are attached to each SLA):
The warranty period is illustrated below:
The following is expected from the team:
- Detailed definitions of business & user requirements;
- Development of detailed functional specification of the solution;
- Development of the solution (Java);
- Development of test protocols;
- Formal testing of the application to demonstrate that it meets all requirements (formal test where Werum provides data to be tested & verifies correctness of application);
PAS-X is composed of multiple modules which are developed on different platforms and integrated into a single product. The primary required skill for this project is familiarity with Java. Experience with SQL relational databases is preferable. To provide graphical views of contract and warranty states, some experience with user interfaces or computer graphics is beneficial.
The solution should be Java-based and presented as a web application; this may be a Java applet or a JSP web application. The implementation must prioritize standard Java libraries and web standards.
Signed NDA & IP Agreements May Be Required From Each Member of Student Team